Wednesday, February 11, 2009

Competing on experimentation

These articles and books make the case that, for companies, "success has always come from scientific experimentation."




Spotlight

[Image: Thomas Edison in His Lab]
Where would our spotlight be without Thomas Edison? Edison was a man who didn't believe in giving up; he said, "I have not failed. I've just found 10,000 ways that won't work." Born on this date in 1847, the "Wizard of Menlo Park" received over 1,000 patents for things we consider indispensable today, such as the incandescent light bulb, the phonograph, and the stock ticker. He created the first motion picture camera and the first copyrighted film, Fred Ott's Sneeze. Edison was hearing impaired from a young age and had only three months of formal schooling. But, luckily for us, he loved to tinker.



How to Design Smart Business Experiments


Managers regularly implement new ideas without evidence to back them up. They act on hunches and often learn very little along the way. That doesn’t have to be the case. With the help of broadly available software and some basic investments in building capabilities, managers don’t need a PhD in statistics to base consequential decisions on scientifically sound experiments.

Some companies with rich consumer-transaction data—Toronto-Dominion, CKE Restaurants, eBay, and others—are routinely testing innovations well outside the realm of product R&D. As randomized testing becomes standard procedure in certain settings (website analysis, for instance), firms learn to apply it in other areas as well. Entire organizations that adopt a “test and learn” culture stand to realize the greatest benefits.

That said, firms need to determine when formal testing makes sense. Generally, it’s much more applicable to tactical decisions (such as choosing a new store format) than to strategic ones (such as figuring out whether to acquire a business). Tests are useful only if managers define and measure desired outcomes and formulate logical hypotheses about how proposed interventions will play out.

To begin incorporating more scientific management into your business, acquaint managers at all levels with your organization’s testing process. A shared understanding of what constitutes a valid test—and how it jibes with other processes—helps executives to set expectations and innovators to deliver on them. The process always begins with creating a testable hypothesis. Then the details of the test are designed, which means identifying sites or units to be tested, selecting control groups, and defining test and control situations. After the test is carried out for a specified period, managers analyze the data to determine results and appropriate actions. Results ideally go into a “learning library,” so others can benefit from them.
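As a hypothetical illustration of that process, the Python sketch below compares a defined outcome metric across test and control sites and records the result as a learning-library entry. All site names and figures are invented for illustration; nothing here comes from the article itself.

```python
# A minimal sketch of the test-vs-control comparison step described above.
# All site names and figures are hypothetical.
from statistics import mean

# Outcome metric (e.g., % change in weekly sales) measured after the intervention.
test_sites = {"store_A": 4.2, "store_B": 3.1, "store_C": 5.0}
control_sites = {"store_X": 0.8, "store_Y": 1.1, "store_Z": 0.5}

estimated_effect = mean(test_sites.values()) - mean(control_sites.values())

# A "learning library" entry records the hypothesis alongside the outcome,
# so later tests can build on what was learned.
learning_entry = {
    "hypothesis": "The new store format lifts weekly sales.",
    "test_sites": sorted(test_sites),
    "control_sites": sorted(control_sites),
    "estimated_effect_points": round(estimated_effect, 1),
}
print(learning_entry)
```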

*****





Managers now have the tools to conduct small-scale tests and gain real insight. But too many “experiments” don’t prove much of anything.

Every day, managers in your organization take steps to implement new ideas without having any real evidence to back them up. They fiddle with offerings, try out distribution approaches, and alter how work gets done, usually acting on little more than gut feel or seeming common sense—“I’ll bet this” or “I think that.” Even more disturbing, some wrap their decisions in the language of science, creating an illusion of evidence. Their so-called experiments aren’t worthy of the name, because they lack investigative rigor. It’s likely that the resulting guesses will be wrong and, worst of all, that very little will have been learned in the process.

Sidebar: Idea in Brief

Take the example of a major retail bank that set the goal of improving customer service. It embarked on a program hailed as scientific: Some branches were labeled “laboratories”; the new approaches being tried were known as “experiments.” Unfortunately, however, the methodology wasn’t as rigorous as the rhetoric implied. Eager to try out a variety of ideas, the bank changed many things at once in its “labs,” making it difficult if not impossible to determine what was really driving any improved results. Branches undergoing interventions weren’t matched to control sites for the most part, so no one could say for sure that the outcomes noted wouldn’t have happened anyway. Anxious to head off criticism, managers did provide a control in one test, which was designed to see if placing video screens showing television news over waiting lines would shorten customers’ perceived waiting time. But rather than looking at control and test groups, they compared just one control site with one test site. That wasn’t enough to ensure statistically valid results. Perceived waiting time did drop in the test branch, but it went up substantially in the control branch, despite no changes there. Those confounding data kept the test from being at all conclusive—but that’s not how the findings were presented to top management.

It doesn’t have to be this way. Thanks to new, broadly available software and given some straightforward investments to build capabilities, managers can now base consequential decisions on scientifically valid experiments. Of course, the scientific method is not new, nor is its application in business. The R&D centers of firms ranging from biscuit bakers to drug makers have always relied on it, as have direct-mail marketers tracking response rates to different permutations of their pitches. To apply it outside such settings, however, has until recently been a major undertaking. Any foray into the randomized testing of management ideas—that is, the random assignment of subjects to test and control groups—meant employing or engaging a PhD in statistics or perhaps a “design of experiments” expert (sometimes seen in advanced TQM programs). Now, a quantitatively trained MBA can oversee the process, assisted by software that will help determine what kind of samples are necessary, which sites to use for testing and controls, and whether any changes resulting from experiments are statistically significant.
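As a rough sketch of what such software does, the Python fragment below answers the two questions the article mentions: how large a sample is needed, and whether an observed change is statistically significant. The effect size, branch data, and libraries (common open-source statistics packages) are illustrative assumptions, not tools named in the article.

```python
# Hypothetical sketch of the two calculations described above:
# (1) how large a sample is needed, and (2) whether an observed
# change is statistically significant.
from scipy import stats
from statsmodels.stats.power import TTestIndPower

# (1) Sample size: to detect a medium effect (Cohen's d = 0.5) at a 5%
# significance level with 80% power, using two equal-sized groups.
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Units needed per group: {n_per_group:.0f}")  # ~64

# (2) Significance: compare perceived wait times (minutes) across several
# test and control branches -- not one of each, as in the bank example above.
test_branches = [6.1, 5.8, 6.5, 5.9, 6.0, 5.7]
control_branches = [7.2, 6.9, 7.5, 7.1, 6.8, 7.4]
t_stat, p_value = stats.ttest_ind(test_branches, control_branches)
print(f"p = {p_value:.4f}")  # p < 0.05 suggests the difference is real
```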

Sidebar: Idea in Practice

Consumer-facing companies rich in transaction data are already routinely testing innovations well outside the realm of product R&D. They include banks such as PNC, Toronto-Dominion, and Wells Fargo; retailers such as CKE Restaurants, Famous Footwear, Food Lion, Sears, and Subway; and online firms such as Amazon, eBay, and Google. As randomized testing becomes standard procedure in certain settings—website analysis, for instance—firms build the capabilities to apply it in other circumstances as well. (See the sidebar “Stop Wondering” for a sampling of tests conducted recently.) To be sure, there remain many business situations where it is not easy or practical to structure a scientifically valid experiment. But while the “test and learn” approach might not always be appropriate (no management method is), it will doubtless gain ground over time. Will it do so in your organization? If it’s like many companies I have studied, an investment in software and training will yield quick returns of the low-hanging-fruit variety. The real payoff, however, will happen when the organization as a whole shifts to a test-and-learn mind-set.

Sidebar: Stop Wondering

When Testing Makes Sense

Formalized testing can provide a level of understanding about what really works that puts more intuitive approaches to shame. In theory, it makes sense for any part of the business in which variation can lead to differential results. In practice, however, there are times when a test is impossible or unnecessary. Some new offerings simply can’t be tested on a small scale. When Best Buy, for example, explored partnering with Paul McCartney on an exclusively marketed CD and a sponsored concert tour, neither component of the promotion could be tested on a small scale, so the company’s managers went with their intuition. At Toronto-Dominion, one of the largest and most profitable banks in Canada, testing is so well established that occasionally managers are reminded that, in the interests of speed, they can make the call without a test when they have a great deal of experience in the relevant business domain.

Generally speaking, the triumphs of testing occur in strategy execution, not strategy formulation. Whether in marketing, store or branch location analysis, or website design, the most reliable insights relate to the potential impact and value of tactical changes: a new store format, for example, or marketing promotion or service process. Scientific method is not well suited to assessing a major change in business models, a large merger or acquisition, or some other game-changing decision.

Capital One’s experience hints at the natural limits of experimental testing in a business. The company has been one of the world’s most aggressive testers since 1988, when its CEO and cofounder, Rich Fairbank, joined its predecessor firm, Signet Bank. You could even say the firm was founded on the concept. One thing that appealed to Fairbank about the credit card industry was its “ability to turn a business into a scientific laboratory where every decision about product design, marketing, channels of communication, credit lines, customer selection, collection policies and cross-selling decisions could be subjected to systematic testing using thousands of experiments.”1 Capital One adopted what Fairbank calls an information-based strategy, and it paid off: The company became the fifth-largest provider of credit cards in the United States.

Yet when it came time to make the largest decision the company had faced in recent years, Capital One’s management concluded that testing would not be useful. Realizing that the business would need other sources of capital to remain independent, the team considered acquiring some regional banks in order to transform itself from a monoline credit provider into a full-service bank. The decision was not tested for a couple of important reasons. First, the nature of the opportunity made it imperative to move quickly; no time was available for even a small-scale test. Second, and more critical, it was impossible to design an experiment that could reliably predict the outcomes of such a major change in business direction. Still, after making the acquisitions, Capital One reaffirmed its commitment to information-based strategy. Its managers immediately set about translating that ethos into the full-service banking context, which required pushing the method further, into tests involving customer service and employee behavior. As one employee told me, “It’s much easier to do randomized testing with direct-mail envelopes than with branch bankers.”

Copyright © 2009 Harvard Business School Publishing Corporation. All rights reserved.

*****

Reverse Engineering Google's Innovation Machine

Bala Iyer, Thomas H. Davenport

  • Length: 13p
  • Publication Date: April 2008

Every piece of the business plays a part, every part is indispensable, every failure breeds success, and every success demands improvement.

*****


同名之書由台灣的CPC在 2009 年翻譯出版(平) 無參考文獻索引 定價為原書(精)三分之一


Competing on Analytics

Some companies have built their very businesses on their ability to collect, analyze, and act on data. Every company can learn from what these firms do.

We all know the power of the killer app. Over the years, groundbreaking systems from companies such as American Airlines (electronic reservations), Otis Elevator (predictive maintenance), and American Hospital Supply (online ordering) have dramatically boosted their creators’ revenues and reputations. These heralded—and coveted—applications amassed and applied data in ways that upended customer expectations and optimized operations to unprecedented degrees. They transformed technology from a supporting tool into a strategic weapon.

Companies questing for killer apps generally focus all their firepower on the one area that promises to create the greatest competitive advantage. But a new breed of company is upping the stakes. Organizations such as Amazon, Harrah’s, Capital One, and the Boston Red Sox have dominated their fields by deploying industrial-strength analytics across a wide variety of activities. In essence, they are transforming their organizations into armies of killer apps and crunching their way to victory.

Organizations are competing on analytics not just because they can—business today is awash in data and data crunchers—but also because they should. At a time when firms in many industries offer similar products and use comparable technologies, business processes are among the last remaining points of differentiation. And analytics competitors wring every last drop of value from those processes. So, like other companies, they know what products their customers want, but they also know what prices those customers will pay, how many items each will buy in a lifetime, and what triggers will make people buy more. Like other companies, they know compensation costs and turnover rates, but they can also calculate how much personnel contribute to or detract from the bottom line and how salary levels relate to individuals’ performance. Like other companies, they know when inventories are running low, but they can also predict problems with demand and supply chains, to achieve low rates of inventory and high rates of perfect orders.
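To make just one of those claims concrete, a first-pass estimate of how much a customer will buy in a lifetime can come from the standard retention-based lifetime-value formula, CLV = m * r / (1 + d - r), where m is annual margin, r is the retention rate, and d is the discount rate. The formula is a textbook simplification, and all figures below are invented; none of this is drawn from the companies named in the article.

```python
# Illustrative only: the simple retention-based customer-lifetime-value
# formula; all inputs here are invented.
def lifetime_value(annual_margin: float, retention: float, discount: float) -> float:
    """CLV = margin * retention / (1 + discount - retention)."""
    return annual_margin * retention / (1 + discount - retention)

# A customer yielding $120/year in margin, 85% retention, 10% discount rate:
print(f"${lifetime_value(120.0, 0.85, 0.10):,.2f}")  # $408.00
```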

And analytics competitors do all those things in a coordinated way, as part of an overarching strategy championed by top leadership and pushed down to decision makers at every level. Employees hired for their expertise with numbers or trained to recognize their importance are armed with the best evidence and the best quantitative tools. As a result, they make the best decisions: big and small, every day, over and over and over.

Although numerous organizations are embracing analytics, only a handful have achieved this level of proficiency. But analytics competitors are the leaders in their varied fields—consumer products, finance, retail, and travel and entertainment among them. Analytics has been instrumental to Capital One, which has exceeded 20% growth in earnings per share every year since it became a public company. It has allowed Amazon to dominate online retailing and turn a profit despite enormous investments in growth and infrastructure. In sports, the real secret weapon isn’t steroids, but stats, as dramatic victories by the Boston Red Sox, the New England Patriots, and the Oakland A’s attest.

At such organizations, virtuosity with data is often part of the brand. Progressive makes advertising hay from its detailed parsing of individual insurance rates. Amazon customers can watch the company learning about them as its service grows more targeted with frequent purchases. Thanks to Michael Lewis’s best-selling book Moneyball, which demonstrated the power of statistics in professional baseball, the Oakland A’s are almost as famous for their geeky number crunching as they are for their athletic prowess.

To identify characteristics shared by analytics competitors, two of my colleagues at Babson College’s Working Knowledge Research Center and I studied 32 organizations that have made a commitment to quantitative, fact-based analysis. We classified eleven of those organizations as full-bore analytics competitors, meaning that top management had announced analytics was key to their strategies, that they had multiple initiatives under way involving complex data and statistical analysis, and that they managed analytical activity at the enterprise (not departmental) level.

This article lays out the characteristics and practices of these statistical masters and describes some of the very substantial changes other companies must undergo in order to compete on quantitative turf. As one would expect, the transformation requires a significant investment in technology, the accumulation of massive stores of data, and the formulation of companywide strategies for managing the data. But at least as important, it requires executives’ vocal, unswerving commitment and willingness to change the way employees think, work, and are treated. As Gary Loveman, CEO of analytics competitor Harrah’s, frequently puts it, “Do we think this is true? Or do we know?”

Copyright © 2005 Harvard Business School Publishing Corporation. All rights reserved.
