Should you hire a COO?


10 years ago, if you were the founder of a high-growth company it was reasonably likely that your investors would want to bring in “adult supervision” as CEO to run your company.  This has shifted more recently with a slate of COO hirings following the example set by Facebook with the successful run of Sheryl Sandberg.

Box, Facebook, Foursquare[1], Stripe, Square, Twitter[2] and Yelp are all companies that chose to hire a COO as a complement to the founders, rather than replace the CEO with a “grey haired professional operator”[3].

Why A COO?

A COO is not about the title, but rather the background and experience you are looking for.  Optimally, you want someone who will come in to complement, operationalize and execute your vision as a founder.  Many technical or product focused founders want to (and should) remain focused on the product and overall market strategy.  In parallel the COO would build out and manage areas that the founders lack bandwidth, interest or experience in.  E.g.:

  • Add executive bandwidth and a business partner for technical or product focused founders.  
  • Scale the company.  High growth companies have special needs around scaling and implementing simple processes (e.g. recruiting infrastructure, corporate governance, etc.).
  • Build out executive team and organizational scaffold.  Hire executives or teams for areas founders don’t understand well e.g. finance, accounting, and sales.  Help in screening and hiring executives for product, engineering, marketing as well.
  • Take on the areas founders don’t have time for, are poorly suited for, or don’t want to focus on.  Ongoing management of the “business side” (corporate development/M&A, business development, sales, HR, recruiting, etc.) while the founders continue to focus on product, design, and engineering – e.g. Mark Zuckerberg’s focus on product at Facebook.
  • Shape the culture for the next phase of the company’s life.  Sheryl Sandberg has impacted how Facebook is run across the entire organization by, for example, bringing a culture of people development and managerial excellence. 

Why Not A COO?

All growing companies need to build out an executive team and the ability and expertise to scale.  This can be done by hiring or promoting a set of people who together complement the founders and allow the company to scale rapidly and effectively.  It is not necessary that one of these team members have the “COO” title.  For example, at Polyvore the CFO owns multiple areas beyond traditional finance.

Additionally, the COO title sets a very high bar for the person.  You can’t really hire above them later like you could with a VP, taking flexibility out of future organizational evolution and changes as the company goes from e.g. 100 to 5000 people.  If the COO is out of their depth they often won’t accept demotion to VP and will leave instead.

Another option raised by Reid Hoffman is to replace yourself as CEO instead.

How Do You Choose A COO?

For COO you optimally want someone strong enough to be CEO of a company or with solid general management or key functional experience.  For example Sheryl Sandberg interviewed for other CEO roles before accepting COO of Facebook.  Similarly, Box’s COO Dan Levin was a CEO or President of 2 companies and a GM at Intuit before joining Box. You want someone so excited by your company’s vision and opportunity that they are willing to give up some of the perceived upside of being a general manager or CEO elsewhere to join your company.  

Criteria to look for are:

  • Seasoned executive willing to suppress their own ego to partner with, and execute, a founder’s vision.
  • Chemistry with founders.
  • Past experience scaling a company or organization.
  • Entrepreneurial.  Optimally, you want someone who has both operated at scale, as well as has startup experience (or scaled something from scratch at a larger company).
  • Functional expertise.  They should have previously run (at another company) a reasonable subset of the functions you want them to own initially at your company.
  • Ability to hire.  They will be building out a chunk of your company’s organizational skeleton.  You need someone who can hire well and manage executives themselves.
  • Someone you can learn from.  Optimally as a first time founder or manager the COO can teach you about management or other areas.
  • Process focus.  Bring lightweight processes or best practices from other companies, and be smart about how to tailor the necessary ones for use by your company.  Sheryl Sandberg did this in many ways at FB.

When hiring the COO, you should have a clear sense of what you want to keep as founder (e.g. design, product, marketing, engineering) and what you are willing to truly delegate (e.g. BD, sales, corporate development, finance, HR, operations, etc.).  Otherwise you may not set the situation up for success to begin with.

I don’t think every company needs a COO.  Rather, a well rounded executive or leadership team allows you to do without one.  However, if you do decide to hire a COO the criteria above may be of use.


[1] The original title for the Foursquare COO was “general manager”.

[2] Twitter has recently re-added the COO role after a period without one.  The original COO, Dick Costolo, was promoted from COO to CEO.

[3] Other interesting early examples include Microsoft, where Bill Gates in the 1980s had seasoned “Presidents” working for him, and Oracle, where Larry Ellison has gone through various COOs over the years.  Gates, of course, only brought in venture money after Microsoft was profitable, so he had sufficient control of the company to not worry about being replaced.

[4] Extra thanks to Elad Gil for the ideas which I shamelessly ripped off for this post.


Chris Dixon on “The Credentials Trap”


Following my decision to share most (if not all) of the material from the tech writing I read, today I’m sharing something I came across on Chris Dixon’s blog via the Startup Digest reading list. It’s a topic that resonates with me because, while I’m currently working for someone else, I have plans to launch my own startup.
I hope it’s a good read.
I talk a lot to people who are deciding between startups and established companies. They’re usually early in their careers and have been exclusively affiliated with well-known schools and companies. As a result, they’re accustomed to praise from family and friends. Going to a startup is scary, as Jessica Livingston, cofounder of Y Combinator, describes:
Everyone you encounter will have doubts about what you’re doing—investors, potential employees, reporters, your family and friends. What you don’t realize until you start a startup is how much external validation you’ve gotten for the conservative choices you’ve made in the past. You go to college and everyone says, “Great!” Then you graduate and get a job at Google and everyone says, “Great!”
But optimizing for external validation is a dangerous trap. You’re fighting over a fixed pie against well-credentialed peers. The most likely outcome is a middle management job where you’ll have little impact and never seriously attempt to realize your ambitions. Peter Thiel’s personal experience illustrates this well:
By graduation, students at Stanford Law and other elite law schools have been racking up credentials and awards for well over a dozen years. The pinnacle of post law school credentialism is landing a Supreme Court clerkship. After graduating from SLS in ’92 and clerking for a year on the 11th Circuit, Peter Thiel was one of the small handful of clerks who made it to the interview stage with two of the Justices. That capstone credential was within reach. Peter was so close to winning that last competition. There was a sense that, if only he’d get the nod, he’d be set for life. But he didn’t.
Years later, after Peter built and sold PayPal, he reconnected with an old friend from SLS. The first thing the friend said was, “So, aren’t you glad you didn’t get that Supreme Court clerkship?” It was a funny question. At the time, it seemed much better to be chosen than not chosen. But there are many reasons to doubt whether winning that last competition would have been so good after all. Probably it would have meant a future of more insane competition. And no PayPal. The pithy, wry version of this is the line about Rhodes Scholars: they all had a great future in their past.
Great institutions can prepare you for great things. Credentials can open doors. But don’t let them become an end in themselves.

Getting a developer for your “wonderful” ideas


It happens all the time: someone thinks they have a great idea and all they need is a developer to implement it and internet riches will roll in. Recently someone sent me a (nice and very reasonable) email about how they had been developing their idea and now were looking for developers. Their efforts to reach out to programmers on GitHub hadn’t been very successful and they were wondering how to proceed.
This is what I said:
To answer your question on meeting developers, it’s complicated. I think reaching out to people on GitHub is a nice impulse: they’re developers that are making cool stuff and open to being somewhat out there in public. And, it shows that you understand how (many) developers work and are coming to them.
However, you’re correct that many developers are hesitant to join startups like that. Frankly, ideas are easy, it’s execution that’s hard. I think that I’m like many freelance developers: I get many people coming to me with their ‘great idea’ that will be oh so simple for me to build and I should definitely build it in return for a portion of the revenue.
Unfortunately things are rarely (never?) that simple. First, they often dramatically underestimate the amount of work it’ll take. This leads to the second problem, they dramatically undervalue the programmer’s time and skill. They both think that less skill is required (so the programmer doesn’t deserve much compensation) and that less time is required (effectively creating a very low hourly rate for the programmer, given the flat rates that are often proposed).
Finally, if a revenue share is being proposed, the programmer is essentially trusting that the product will sell well, and that the other person will do the marketing and promotion necessary to make it a success (the amount of which is often underestimated by the idea person). In the best case this means the developer waits months to get paid; in the worst case, they never get paid at all. Having already handed over the code for the product to go to market, the developer has essentially no leverage to ensure that they get paid.
Then there’s the question of the opportunity cost for the developer. First, if the developer is working for a reduced rate compared to their regular clients, that’s obviously money they’re not making that they normally would. Of course, in theory the project is much more fun and interesting and world-changing than their normal gigs. However, I find that’s often not the case.
That relates to another issue, which is that often the ideas are unrealistic. I’ve found that, partially thanks to the variety of projects I’ve worked on as a freelancer, I have a much better sense of the problems inherent in an idea, both as a product and as a business, than the person presenting it to me.
Given all this and considering that developers are creative people themselves who could be spending their unpaid time on their own projects, the final opportunity cost is simply that it would take time from their own potentially world-changing project.
So, it takes a lot to convince a developer to join your project! For all the reasons above, developers like me are understandably wary of people coming to them with lots of ideas but little money. Now, I don’t know at all if you’re guilty of those things but unfortunately your messages probably triggered developers’ thoughts of all that bullshit! Bummer, isn’t it!
So, my first recommendation would be to visit a bunch of in-person developer events. That way you can form a nice personal connection with developers and see who’s active in Amsterdam.
Second, why not learn to make it yourselves? Without knowing what you want to do I can’t say how hard it’d be, but often it’s a lot easier than you might think. And I and many others at the programming meetups are happy to answer questions and help new programmers. Just speaking from my own experience, I’ve always found it easier to learn a new programming language or tool when I had a specific project I wanted to learn it for.
culled from:

Programmer Interrupted


I’m writing this post in an apt state: low-sleep, busy, disoriented, and interrupted. I try all the remedies: Pomodoro, working in coffee shops, headphones, and avoiding work until I’m distraction-free late at night.
But it is only so long before interruption finds a way to pierce my protective bubble. Like you, I am programmer, interrupted. Unfortunately, our understanding of interruptions and the remedies for them is not too far from homeopathic cures and bloodletting leeches.
But what is the evidence and what can we do about it? Every few months I still see programmers who are asked to not use headphones during work hours or are interrupted by meetings too frequently but have little defense against these claims. I also fear our declining ability to handle these mental workloads and interruptions as we age.
The costs of interruptions have been studied in office environments. An interrupted task is estimated to take twice as long and contain twice as many errors as uninterrupted tasks (Czerwinski:04). Workers have to work in a fragmented state as 57% of tasks are interrupted (Mark:05).
For programmers, there is less evidence of the effects and prevalence of interruptions. Typically, the number that gets tossed around for getting back into the “zone” is at least 15 minutes after an interruption. Interviews with programmers produce a similar number (vanSolingen:98). Nevertheless, numerous figures have weighed in: Paul Graham stresses the differences between a maker’s schedule and a manager’s schedule. Jason Fried says the office is where we go to get interrupted.

Interruptions of Programmers

Based on an analysis of 10,000 programming sessions recorded from 86 programmers using Eclipse and Visual Studio, and a survey of 414 programmers (Parnin:10), we found:
  • A programmer takes between 10-15 minutes to start editing code after resuming work from an interruption.
  • When interrupted during an edit of a method, only 10% of the time did a programmer resume work in less than a minute.
  • A programmer is likely to get just one uninterrupted 2-hour session in a day.
We also looked at some of the ways programmers coped with interruption:
  • In most sessions, programmers navigated to several locations to rebuild context before resuming an edit.
  • Programmers insert intentional compile errors to force a “roadblock” reminder.
  • A source diff is seen as a last-resort way to recover state, but can be cumbersome to review.

Worst Time to Interrupt a Programmer

Research shows that the worst time to interrupt anyone is when they have the highest memory load. Using neural correlates for memory load, such as pupillometry, studies have shown that interruptions during peak loads cause the biggest disruption (Iqbal:04).
We looked at subvocal utterances during a programming task to find different levels of memory load during programming tasks (Parnin:11).
If an interrupted person is allowed to suspend their working state or reach a “good breakpoint”, then the impact of the interruption can be reduced (Trafton:03). However, programmers often need at least 7 minutes before they transition from a high memory state to a low memory state (Iqbal:07). An experiment evaluating the states in which a programmer least desired an interruption found these states to be especially problematic (Fogarty:05):
  • During an edit, especially with concurrent edits in multiple locations.
  • Navigation and search activities.
  • Comprehending data flow and control flow in code.
  • IDE window is out of focus.

A Memory-Supported Programming Environment

Ultimately, we cannot eliminate interruptions. In some cases interruption may even be beneficial. But we can find ways to reduce the impact on the memory failures that often result from interruption. I introduce some types of memory that get disrupted or heavily burdened during programming and some conceptual aids that can support them.

Prospective Memory

Prospective memory holds reminders to perform future actions in specific circumstances e.g. to buy milk on the way home from work (PM).
Various studies have described how developers have tried to cope with prospective memory failures. For example, developers often leave TODO comments in the code they are working on (Storey:08). A drawback of this mechanism is that there is no impetus for viewing these reminders. Instead, to force a prospective prompt, developers may intentionally leave a compile error to ensure they remember to perform a task (Parnin:10). A problem with compile errors is that they inhibit the ability to switch to another task on the same codebase. Finally, developers also do what other office workers do: leave sticky notes and emails to themselves (Parnin:10).
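The “deliberate roadblock” tactic is easy to sketch. In a compiled language the marker would be a compile error; in Python the failure surfaces at run time instead, the moment the code path executes. The function and the task it names below are invented for illustration:

```python
def apply_discount(subtotal):
    """A price calculation left half-finished before an interruption."""
    total = subtotal
    # Deliberate roadblock: this undefined name raises NameError as soon
    # as this code path runs, forcing me to resume the task right here.
    RESUME_HERE_add_tax_handling  # noqa: F821
    return total
```

Unlike a TODO comment, this reminder cannot be silently overlooked; the cost, as noted above, is that it blocks any other work that exercises the same code.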
A smart reminder is a reminder that can be triggered based on conditions, such as a teammate checking in code or proximity to a reminder.

Attentive Memory

Attentive memory holds conscious memories that can be freely attended to.
Some programming tasks require developers to make similar changes across a codebase. For example, if a developer needs to refactor code in order to move a component from one location to another, or to update the code to use a new version of an API, then that developer needs to systematically and carefully edit all the locations affected by the desired change. Unfortunately, even a simple change can lead to many complications, requiring the developer to track the status of many locations in the code. Even worse, after an interruption to such a task, the tracked statuses in attentive memory quickly evaporate, and the numerous visited and edited locations confound retrieval.
Touch points allow a programmer to track status across many locations in code.

Associative Memory

Associative memory holds a set of non-conscious links between manifestations of co-occurring stimuli.
Developers commonly experience disorientation when navigating to unfamiliar code. The disorientation stems from associative memory failures that arise when a developer must recall information about the places of code they are viewing or what to view next. Researchers believe the lack of rich and stable environmental cues in interface elements, such as document tabs, prevent associative memories from being recalled.
The presence of multiple modalities in a stimulus increases the ability to form an associative memory. In this sense, a modality refers to a distinct type of perception that is processed by a distinct region of the brain, such as the auditory or visual pathways. Examples of different modalities include: lexical, spatial, operational, and structural. When multiple modalities are present in the same stimulus, more pathways are activated, thus increasing the chance of forming an associative memory. In contrast, a monotonous stimulus with a single modality is less likely to form an associative memory.
An associative link helps a programmer by situating information of multiple modalities with a program element. In particular, it can improve navigation of document tabs, whose default configuration is especially spartan, often showing just the name of the document.

Episodic Memory

Episodic memory is the recollection of past events.
Software developers continually encounter new learning experiences about their craft. Retaining and making use of such acquired knowledge requires that developers be able to recollect those experiences from their episodic memory. When recalling from episodic memory, developers commonly experience failures that limit their ability to recall essential details or recollect the key events. For example, a developer may forget the changes they performed for a programming task, or forget details such as the blog post that was used for implementing a part of the task.
A code narrative is an episodic memory aid that helps a developer recall contextual details and the history of programming activity. Two narrative structures are currently supported: a review mode for high-level recall of events, and a share mode for publishing a coding task for others.
See a blog post shared and published semi-automatically via a code narrative.

Conceptual Memory

Conceptual memory is a continuum between perceptions and abstractions. How does the brain remember objects such as a hammer and concepts such as tool? The brain first learns basic features of encountered stimuli such as the wood grains and metal curves of a hammer and then organizes those features into progressively higher levels of abstraction.
Developers are expected to maintain expertise in their craft throughout their careers. Unfortunately, the path to becoming an expert is not easily walked: For a novice, evidence suggests this can be a 10 year journey (Chi:82). Furthermore, for experts trying to become experts in new domains, like the desktop developer becoming a web developer, there are many concepts that must be put aside and new ones learned.
Studies examining the difference between an expert and novice find that performance differences arise from differences in brain activity. Not only do experts require less brain activity than novices, they also use different parts of their brains (Milton:07): Experts use conceptual memory whereas novices use attentive memory. That is, experts are able to exploit abstractions in conceptual memory, whereas novices must hold primitive representations in attentive memory.
Sketchlets (alpha) helps a programmer form and prime concepts by supporting abstraction and reviewing concepts that need to be refreshed.


  • fMRI studies of programmers. See preliminary research!
  • Will future programmers take designer nootropics for boosting memory and attention to keep up?
  • Can we predict the memory load of using a language feature or performing a particular programming task?
  • What new tool ideas can be derived for programmers?
  • What experiments need to be run?
culled from:

Creatively Learning to Program


The programming website Project Euler provides a plan for how to learn anything in fun, discrete steps
When Colin Hughes was about eleven years old his parents brought home a rather strange toy. It wasn’t colorful or cartoonish; it didn’t seem to have any lasers or wheels or flashing lights; the box it came in was decorated, not with the bust of a supervillain or gleaming protagonist, but bulleted text and a picture of a QWERTY keyboard. It called itself the “ORIC-1 Micro Computer.” The package included two cassette tapes, a few cords and a 130-page programming manual.
On the whole it looked like a pretty crappy gift for a young boy. But his parents insisted he take it for a spin, not least because they had just bought the thing for more than £129. And so he did. And so, he says, “I was sucked into a hole from which I would never escape.”
It’s not hard to see why. Although this was 1983, and the ORIC-1 had about the same raw computing power as a modern alarm clock, there was something oddly compelling about it. When you turned it on all you saw was the word “Ready,” and beneath that, a blinking cursor. It was an open invitation: type something, see what happens.
In less than an hour, the ORIC-1 manual took you from printing the word “hello” to writing short programs in BASIC — the Beginner’s All-Purpose Symbolic Instruction Code — that played digital music and drew wildly interesting pictures on the screen. Just when you got the urge to try something more complicated, the manual showed you how.
In a way, the ORIC-1 was so mesmerizing because it stripped computing down to its most basic form: you typed some instructions; it did something cool. This was the computer’s essential magic laid bare. Somehow ten or twenty lines of code became shapes and sounds; somehow the machine breathed life into a block of text.
No wonder Colin got hooked. The ORIC-1 wasn’t really a toy, but a toy maker. All it asked for was a special kind of blueprint.
Once he learned the language, it wasn’t long before he was writing his own simple computer games, and, soon after, teaching himself trigonometry, calculus and Newtonian mechanics to make them better. He learned how to model gravity, friction and viscosity. He learned how to make intelligent enemies.
More than all that, though, he learned how to teach. Without quite knowing it, Colin had absorbed from his early days with the ORIC-1 and other such microcomputers a sense for how the right mix of accessibility and complexity, of constraints and open-endedness, could take a student from total ignorance to near mastery quicker than anyone — including his own teachers — thought possible.
It was a sense that would come in handy, years later, when he gave birth to Project Euler, a peculiar website that has trained tens of thousands of new programmers, and that is in its own modest way the emblem of a nascent revolution in education.
oric-1 screenshot.png
* * *
Sometime between middle and high school, in the early 2000s, I got a hankering to write code. It was very much a “monkey see, monkey do” sort of impulse. I had been watching a lot of TechTV — an obscure but much-loved cable channel focused on computing, gadgets, gaming and the Web — and Hackers, the 1995 cult classic starring Angelina Jolie in which teenaged computer whizzes, accused of cybercrimes they didn’t commit, have to hack their way to the truth.
I wanted in. So I did what you might expect an over-enthusiastic suburban nitwit to do, and asked my mom to drive me to the mall to buy Ivor Horton’s 1,181-page, 4.6-pound Beginning Visual C++ 6. I imagined myself working montage-like through the book, smoothly accruing expertise one chapter at a time.
What happened instead is that I burned out after a week. The text itself was dense and unsmiling; the exercises were difficult. It was quite possibly the least fun I’ve ever had with a book, or, for that matter, with anything at all. I dropped it as quickly as I had picked it up.
Remarkably I went through this cycle several times: I saw people programming and thought it looked cool, resolved myself to learn, sought out a book and crashed the moment it got hard.
For a while I thought I didn’t have the right kind of brain for programming. Maybe I needed to be better at math. Maybe I needed to be smarter.
But it turns out that the people trying to teach me were just doing a bad job. Those books that dragged me through a series of structured principles were just bad books. I should have ignored them. I should have just played.

Nobody misses that fact more egregiously than the American College Board, the folks responsible for setting the AP Computer Science high school curriculum. The AP curriculum ought to be a model for how to teach people to program. Instead it’s an example of how something intrinsically amusing can be made into a lifeless slog.
ap curriculum outline.png
I imagine that the College Board approached the problem from the top down. I imagine a group of people sat in a room somewhere and asked themselves, “What should students know by the time they finish this course?”; listed some concepts, vocabulary terms, snippets of code and provisional test questions; arranged them into “modules,” swaths of exposition followed by exercises; then handed off the course, ready-made, to teachers who had no choice but to follow it to the letter.
Whatever the process, the product is a nightmare described eloquently by Paul Lockhart, a high school mathematics teacher, in his short booklet, A Mathematician’s Lament, about the sorry state of high school mathematics. His argument applies almost beat for beat to computer programming.
Lockhart illustrates our system’s sickness by imagining a fun problem, then showing how it might be gutted by educators trying to “cover” more “material.”
Take a look at this picture:
lockhart's triangle.png
It’s sort of neat to wonder, How much of the box does the triangle take up? Two-thirds, maybe? Take a moment and try to figure it out.
If you’re having trouble, it could be because you don’t have much training in real math, that is, in solving open-ended problems about simple shapes and objects. It’s hard work. But it’s also kind of fun — it requires patience, creativity, an insight here and there. It feels more like working on a puzzle than one of those tedious drills at the back of a textbook.
If you struggle for long enough you might strike upon the rather clever idea of chopping your rectangle into two pieces like so:
lockhart's triangle with vertical.png
Now you have two rectangles, each cut diagonally in half by a leg of the triangle. So there is exactly as much space inside the triangle as outside, which means the triangle must take up exactly half the box!
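The same argument can be written as a one-line calculation. If the box has base b and height h, and the vertical cut falls a distance x from the left edge, each piece is a rectangle cut in half by a leg of the triangle:

```latex
A_{\text{triangle}}
  = \underbrace{\tfrac{1}{2}\,xh}_{\text{left piece}}
  + \underbrace{\tfrac{1}{2}\,(b-x)\,h}_{\text{right piece}}
  = \tfrac{1}{2}\,bh
```

The x drops out entirely, which is the “exactly half the box” conclusion, no matter where the triangle’s apex sits.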
This is what a piece of mathematics looks and feels like. That little narrative is an example of the mathematician’s art: asking simple and elegant questions about our imaginary creations, and crafting satisfying and beautiful explanations. There is really nothing else quite like this realm of pure idea; it’s fascinating, it’s fun, and it’s free!
But this is not what math feels like in school. The creative process is inverted, vitiated:
This is why it is so heartbreaking to see what is being done to mathematics in school. This rich and fascinating adventure of the imagination has been reduced to a sterile set of “facts” to be memorized and procedures to be followed. In place of a simple and natural question about shapes, and a creative and rewarding process of invention and discovery, students are treated to this:
triangle area formula picture.png
“The area of a triangle is equal to one-half its base times its height.” Students are asked to memorize this formula and then “apply” it over and over in the “exercises.” Gone is the thrill, the joy, even the pain and frustration of the creative act. There is not even a problem anymore. The question has been asked and answered at the same time — there is nothing left for the student to do.
* * *
My struggle to become a hacker finally saw a breakthrough late in my freshman year of college, when I stumbled on a simple question:
If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23.
Find the sum of all the multiples of 3 or 5 below 1000.
This was the puzzle that turned me into a programmer. This was Project Euler problem #1, written in 2001 by a then much older Colin Hughes, that student of the ORIC-1 who had gone on to become a math teacher at a small British grammar school and, not long after, the unseen professor to tens of thousands of fledglings like myself.
The problem itself is a lot like Lockhart’s triangle question — simple enough to entice the freshest beginner, sufficiently complicated to require some thought.
What’s especially neat about it is that someone who has never programmed — someone who doesn’t even know what a program is — can learn to write code that solves this problem in less than three hours. I’ve seen it happen. All it takes is a little hunger. You just have to want the answer.
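For the curious, a first program of the sort a beginner might arrive at looks like this in Python (Project Euler checks only the final number, so any language works; this is one sketch among many):

```python
# Project Euler problem #1: sum the multiples of 3 or 5 below 1000.
total = 0
for n in range(1000):              # n = 0, 1, ..., 999
    if n % 3 == 0 or n % 5 == 0:   # keep multiples of 3 or 5
        total += n
print(total)  # 233168
```

Swapping 1000 for 10 reproduces the worked example in the problem statement, whose multiples 3, 5, 6 and 9 sum to 23, a handy sanity check.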
That’s the pedagogical ballgame: get your student to want to find something out. All that’s left after that is to make yourself available for hints and questions. “That student is taught the best who is told the least.”
It’s like sitting a kid down at the ORIC-1. Kids are naturally curious. They love blank slates: a sandbox, a bag of LEGOs. Once you show them a little of what the machine can do they’ll clamor for more. They’ll want to know how to make that circle a little smaller or how to make that song go a little faster. They’ll imagine a game in their head and then relentlessly fight to build it.
Along the way, of course, they’ll start to pick up all the concepts you wanted to teach them in the first place. And those concepts will stick because they learned them not in a vacuum, but in the service of a problem they were itching to solve.
Project Euler, named for the Swiss mathematician Leonhard Euler, is popular (more than 150,000 users have submitted 2,630,835 solutions) precisely because Colin Hughes — and later, a team of eight or nine hand-picked helpers — crafted problems that lots of people get the itch to solve. And it’s an effective teacher because those problems are arranged like the programs in the ORIC-1’s manual, in what Hughes calls an “inductive chain”:
The problems range in difficulty and for many the experience is inductive chain learning. That is, by solving one problem it will expose you to a new concept that allows you to undertake a previously inaccessible problem. So the determined participant will slowly but surely work his/her way through every problem.
This is an idea that’s long been familiar to video game designers, who know that players have the most fun when they’re pushed always to the edge of their ability. The trick is to craft a ladder of increasingly difficult levels, each one building on the last. New skills are introduced with an easier version of a challenge — a quick demonstration that’s hard to screw up — and certified with a harder version, the idea being to only let players move on when they’ve shown that they’re ready. The result is a gradual ratcheting up of the learning curve.
Project Euler is engaging in part because it’s set up like a video game, with 340 fun, very carefully ordered problems. Each has its own page, like this one that asks you to discover the three most popular squares in a game of Monopoly played with 4-sided (instead of 6-sided) dice. At the bottom of the puzzle description is a box where you can enter your answer, usually just a whole number. The only “rule” is that the program you use to solve the problem should take no more than one minute of computer time to run.
On top of this there is one brilliant feature: once you get the right answer you’re given access to a forum where successful solvers share their approaches. It’s the ideal time to pick up new ideas — after you’ve wrapped your head around a problem enough to solve it.
This is also why a lot of experienced programmers use Project Euler to learn a new language. Each problem’s forum is a kind of Rosetta stone. For a single simple problem you might find annotated solutions in Python, C, Assembler, BASIC, Ruby, Java, J and FORTRAN.
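To make that concrete: for problem #1, a typical thread contrasts the obvious brute-force loop with a closed-form solution built on the arithmetic-series formula. Here is a sketch of the latter approach (illustrative, not taken from any particular forum post):

```python
def sum_of_multiples(k, limit):
    """Sum of the multiples of k strictly below limit, using the
    arithmetic-series formula: k + 2k + ... + mk = k * m * (m + 1) / 2."""
    m = (limit - 1) // k        # how many multiples of k lie below limit
    return k * m * (m + 1) // 2

# Inclusion-exclusion: multiples of 15 are counted by both the 3s and the 5s,
# so subtract them once.
answer = (sum_of_multiples(3, 1000)
          + sum_of_multiples(5, 1000)
          - sum_of_multiples(15, 1000))
print(answer)  # prints 233168 — same result as the loop, with no loop at all
```

Seeing the two approaches side by side is exactly the kind of idea-transfer the forums make possible.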
Even if you’re not a programmer, it’s worth solving a Project Euler problem just to see what happens in these forums. What you’ll find there is something that educators, technologists and journalists have been talking about for decades. And for nine years it’s been quietly thriving on this site. It’s the global, distributed classroom, a nurturing community of self-motivated learners — old, young, from more than two hundred countries — all sharing in the pleasure of finding things out.
* * *
It’s tempting to generalize: If programming is best learned in this playful, bottom-up way, why not everything else? Could there be a Project Euler for English or Biology?
Maybe. But I think it helps to recognize that programming is actually a very unusual activity. Two features in particular stick out.
The first is that it’s naturally addictive. Computers are really fast; even in the ’80s they were really fast. What that means is there is almost no time between changing your program and seeing the results. That short feedback loop is mentally very powerful. Every few minutes you get a little payoff — perhaps a small hit of dopamine — as you hack and tweak, hack and tweak, and see that your program is a little bit better, a little bit closer to what you had in mind.
It’s important because learning is all about solving hard problems, and solving hard problems is all about not giving up. So a machine that triggers hours-long bouts of frantic obsessive excitement is a pretty nifty learning tool.
The second feature, by contrast, is something that at first glance looks totally immaterial. It’s the simple fact that code is text.
Let’s say that your sink is broken, maybe clogged, and you’re feeling bold — instead of calling a plumber you decide to fix it yourself. It would be nice if you could take a picture of your pipes, plug it into Google, and instantly find a page where five or six other people explained in detail how they dealt with the same problem. It would be especially nice if once you found a solution you liked, you could somehow immediately apply it to your sink.
Unfortunately that’s not going to happen. You can’t just copy and paste a Bob Vila video to fix your garage door.
But the really crazy thing is that this is what programmers do all day, and the reason they can do it is because code is text.
I think that goes a long way toward explaining why so many programmers are self-taught. Sharing solutions to programming problems is easy, perhaps easier than sharing solutions to anything else, because the medium of information exchange — text — is the medium of action. Code is its own description. There’s no translation involved in making it go.
Programmers take advantage of that fact every day. The Web is teeming with code because code is text and text is cheap, portable and searchable. Copying is encouraged, not frowned upon. The neophyte programmer never has to learn alone.
* * *
Garry Kasparov, a chess grandmaster who was famously bested by IBM’s Deep Blue supercomputer, notes how machines have changed the way the game is learned:
There have been many unintended consequences, both positive and negative, of the rapid proliferation of powerful chess software. Kids love computers and take to them naturally, so it’s no surprise that the same is true of the combination of chess and computers. With the introduction of super-powerful software it became possible for a youngster to have a top-level opponent at home instead of needing a professional trainer from an early age. Countries with little by way of chess tradition and few available coaches can now produce prodigies.
A student can now download a free program that plays better than any living human. He can use it as a sparring partner, a coach, an encyclopedia of important games and openings, or a highly technical analyst of individual positions. He can become an expert without ever leaving the house.
Take that thought to its logical end. Imagine a future in which the best way to learn how to do something — how to write prose, how to solve differential equations, how to fly a plane — is to download software, not unlike today’s chess engines, that takes you from zero to sixty by way of a delightfully addictive inductive chain.
If the idea sounds far-fetched, consider that I was taught to program by a program whose programmer, more than twenty-five years earlier, was taught to program by a program.