Monday, February 8, 2010

Brandenburger's Use of Game Theory

In their article “The Right Game: Use Game Theory to Shape Strategy”, Brandenburger and Nalebuff discuss how game theory works and how companies can use its principles to make decisions. The authors state that managers can use these principles to create new strategies for competing, where the chances of success are much higher than they would be if the company continued to compete under the same rules. A classic example from the article is the case of General Motors. The automobile industry was facing heavy expenses from the incentives being offered at retailers. General Motors responded by issuing a new credit card whose holders could apply a portion of their charges toward purchasing a GM car. GM even went so far as to allow cardholders to apply a smaller portion of their charges toward purchasing a Ford car, allowing both companies to raise their prices and increase long-term profits. This action by GM created a new system in which both GM and Ford could be better off, unlike the traditional competitive model where one company must profit at the expense of another.
The authors state that while the traditional win-lose strategy may sometimes be appropriate, the win-win approach can be ideal in many circumstances. One advantage of win-win strategies is that, since they have not been used much, they can yield many previously unidentified opportunities. Another major advantage is that, since other companies have the opportunity to come out ahead as well, they are less likely to resist. The last advantage is that when other companies imitate the move, the initial company benefits as well, in contrast to losing ground as it would in a win-lose situation.
The authors also state that there are five elements to competition that can be changed to provide a more optimal outcome. These elements are: the players (or companies competing), added values brought by each competitor, the rules under which competition takes place, the tactics used, and the scope or boundaries that are established. By understanding these factors, companies can apply different strategies to increase their own odds of success.
The first way that companies can increase their chances of success involves changing who the players in the business are. One option is to introduce new companies into the business. For example, both Coke and Pepsi bought their sweetener from Monsanto, which held a monopoly at the time, so they encouraged Holland Sweetener Company to enter the market and compete with Monsanto. Once it appeared that Monsanto no longer had a monopoly, they were able to negotiate more favorable contracts with Monsanto. Another way that companies can improve their chances is by helping other companies introduce more or better complementary products.
Companies can also change the added values of themselves or their competitors. Obviously, a company can build a better brand or change its business practices so that it operates more efficiently. However, the authors discuss how reducing the added value of other players can also be a viable strategy. Nintendo reduced the added value of retailers by not filling all of their orders, creating a shortage and reducing the bargaining power of the stores buying its products. It also limited the number of licenses available to aspiring programmers, lowering their added value. It even lowered the value held by comic book characters by developing characters of its own that became widely popular, presumably so that it would not have to pay as much to license outside characters.
Changing the rules is another way in which companies can benefit. The authors introduce the idea of judo economics, where a large company may be willing to let a smaller company capture a small market share rather than compete by lowering its prices. As long as it does not become too powerful or greedy, a small company can often participate in the same market without having to compete with larger companies on unfavorable terms. Kiwi International Air Lines introduced service at lower prices to gain market share, but made sure its competitors understood that it had no intention of capturing more than 10% of any market.
Companies can also change perceptions to make themselves better off, either by making things clearer or by making them more uncertain. In 1994, the New York Post made radical price changes in an attempt to get the Daily News to raise its price and regain subscribers. However, the Daily News misunderstood, and the two newspapers were headed for a price war. Once the New York Post made its intentions clear, both papers were able to raise their prices without losing revenue. The authors also show an example of how investment banks can maintain ambiguity to benefit themselves: if the client is more optimistic than the investment bank, the bank can try to charge a higher commission as long as the client does not develop a more realistic appraisal of the company’s value.
Finally, companies can change the boundaries within which they compete. For example, when Sega was unable to gain market share from Nintendo’s 8-bit systems, it changed the game by introducing a new 16-bit system. It took Nintendo 2 years to respond with its own 16-bit system, which gave Sega the opportunity to capture market share and build a strong brand image. This example shows how companies can think outside the box to change the way competition takes place in their industry.
Brandenburger and Nalebuff have illustrated how companies that recognize they can change the rules of competition can vastly improve their odds of success, and sometimes respond in a way that benefits both themselves and the competition. If companies are able to develop a system where they can make both themselves and their competitors better off, then they do not have to worry so much about their competitors trying to counter their moves. Also, because companies can easily copy each other’s ideas, it is to a firm’s advantage if they can benefit when their competitors copy their idea, which is not usually possible under the traditional win-lose structure.
This article has some parallels with the article “Competing on Analytics” (Davenport et al., 2005). The biggest factor that these articles have in common is how crucial it is for managers to understand everything they can about their business and the environment in which they work. In “Competing on Analytics”, the authors say that it is important to be familiar with this information so that managers can change the way they compete to improve their chances of success. At the end of “The Right Game: Use Game Theory to Shape Strategy”, the authors discuss how, in order for companies to change the environment or rules under which they compete, they need to understand everything they can about the constructs within which they are competing. Whether a manager intends to use analytics or game theory to be successful, he or she must first have all available information and use it to understand how to make the company better off. However, “Competing on Analytics” places its emphasis almost exclusively on using quantitative data to improve a company’s efficiency or market share. “The Right Game”, by contrast, focuses more on using information to find creative ways of changing the constructs or rules applied between companies, often yielding a much broader impact.

Tuesday, February 2, 2010

Excessive Planning Can Get in the Way of Good Decisions

In their article “Stop Making Plans; Start Making Decisions”, Mankins and Steele discuss the perils that excessive planning poses for organizations. They discuss how the planning process can be cumbersome and time-consuming, leaving less time for implementing decisions. They also claim that the plans companies make often become useless by the time they can finally be implemented. The authors say that organizations need to spend less time making plans, and also need to update the planning process continuously so that their plans don’t become obsolete later on.
The article discusses how the CEO of ExCom developed a new planning process to improve the quality of the decisions and operations of the organization. This required the executives to meet with the different heads of management and hold long, intensive sessions with them. Unfortunately, this didn’t improve the outcomes of the organization at all, and other organization members did not feel that it worked either. Anonymous respondents said that the process was not only time-consuming, it also failed to help managers make real decisions.
The authors say that many executives have lost confidence in the strategic planning process. This is because many organizations have invested substantial resources in developing plans, only to find that the plans actually end up making decision making more difficult. There are two obstacles that keep strategic planning from working correctly. First, planning is conducted on a periodic basis with the information available at the time; however, many changes take place between planning sessions, which renders the plans obsolete. Second, plans are made for individual units but may not necessarily contribute to the success of the organization as a whole. As a result, executives tend to make decisions that are not consistent with the planning criteria they have established.
The largest problem with periodic planning is that it fails to account for the fact that decision making is an ongoing process. The decisions that managers have to make must incorporate new information and changed variables that the earlier plans could not have considered. When a competitor introduces a new product or a new competitor enters the market, managers have to take that into consideration even though they are following plans made before the event took place. There are other, less obvious problems that arise from planning as well. For example, planning at the functional-unit level of the organization often causes functional managers to become irritated with higher-level executives.
The authors’ central point is that managers must focus on strategic planning that has a direct impact on decision making. They discuss several approaches that successful executives have used to make their planning process more consistent with the decisions they are going to have to make. Most importantly, they keep planning and decision making as two separate processes but make sure to integrate the two. Secondly, they focus on a few key themes. They also make strategy development a continuous process. Finally, they structure strategy reviews to promote real decisions.
The authors say that effective managers must create a process where planning and decision making are done in parallel with each other. This is primarily done by first identifying which decisions will have to be made, rather than starting from what the final decisions should be. For example, Boeing sets up financial forecasts and reviews its business plan regularly in order to keep itself on track and stay aware of the milestones it will have to reach. The organization has also developed a Strategy Integration Process to identify and address critical strategic issues that come to light.
Mankins and Steele also claim that successful managers choose a limited number of variables or themes to focus on, and try to make sure that these apply across numerous divisions within the company. This saves the time it would otherwise take to cover every issue of a single function in its entirety. It also ensures that they are focusing on issues that are crucial to the organization as a whole. The authors discuss how Microsoft has seven business units, and how every strategy that is to be implemented must cover at least two of these units.
In order for the strategy development process to be effective, it must be made continuous. This makes it possible for managers to address a single issue at a time and be ready to make a decision once the planning has been completed. This process also has the advantage of being applicable universally throughout the organization, rather than only at the business unit level. Textron is a company that has adopted this new approach. Previously, the company held all of its business unit reviews in the second quarter, but it now reviews a couple of its business units every quarter. This new planning process has helped Textron go from being an average performer to a superior performer.
Finally, the best planning processes structure their reviews so that the reviews lead directly to better decisions. Textron’s initial planning meeting involves reviewing important facts such as the profitability of different markets and the actions of consumers and competitors. Evaluating this information is seen as essential to the later stages of the planning process.
The information in this article coincides with many of the points raised in Davenport’s “Paralysis by Analysis and Extinction by Instinct” article. Both articles stress that spending too much time analyzing information can delay action and lead to decisions based on data that is no longer valid. The article also parallels “Decision Making: It’s Not What You Think.” Both of these articles discuss how thinking at length about a problem does no good unless the decision maker is willing to take action and make taking action a part of the decision making process.

The Role of Risk in Decision Making

In his article “Decision-Making in the Presence of Risk”, Machina discusses the role that risk plays in making decisions, and the factors that affect how risk should be managed. He begins by noting that if the probability of an event is known, the average outcome converges to the expected value as the number of trials approaches infinity. If only a single trial is run, however, risk plays a much larger role, which in turn affects the way decisions are made. He uses the example of the St. Petersburg paradox, a game whose expected payoff is infinite because of its possible extremely high payoffs; since the most likely payoffs are closer to a dollar, however, this strongly limits how much an individual would be willing to pay to participate in the game.
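The gap between the game’s infinite expected value and its typical payoff is easy to see in a quick simulation (a sketch of my own, not taken from Machina’s article):

```python
import random

def st_petersburg_payoff(rng):
    """One round: the pot starts at $1 and doubles on every heads;
    the game ends, and the pot is paid out, on the first tails.

    Expected value = sum over k of (1/2)**(k+1) * 2**k, which diverges.
    """
    payoff = 1
    while rng.random() < 0.5:  # heads: the pot doubles and we flip again
        payoff *= 2
    return payoff

rng = random.Random(42)
payoffs = [st_petersburg_payoff(rng) for _ in range(100_000)]
median = sorted(payoffs)[len(payoffs) // 2]
# Despite the infinite expected value, the median single-round payoff
# is only a dollar or two -- which is why few people would pay much to play.
print("median payoff:", median)
```

Half of all rounds end on the very first tails with a $1 payoff, so the typical outcome stays tiny no matter how long the simulation runs.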
The article goes on to discuss how Gabriel Cramer and Daniel Bernoulli developed a utility function that adjusts for risk. Using this function, decision makers can determine whether it is a better deal to take a payoff with no risk attached or a larger payoff with a certain degree of risk. Over the last two hundred years, a large number of studies have shown the validity of this theory when it is used appropriately. However, evidence has also shown that people often do not use these models when making a decision, and often ignore the results the models produce when actually deciding.
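A small sketch of how such a risk-adjusted comparison works, assuming Bernoulli’s logarithmic utility purely for illustration (the gamble and its amounts are invented, not from the article):

```python
import math

def certainty_equivalent(outcomes, probs):
    """The sure amount a log-utility agent values as much as the gamble:
    the inverse utility of the expected utility, exp(E[ln(X)])."""
    expected_utility = sum(p * math.log(x) for x, p in zip(outcomes, probs))
    return math.exp(expected_utility)

# A 50/50 gamble between $100 and $10,000:
outcomes, probs = [100, 10_000], [0.5, 0.5]
expected_value = sum(p * x for x, p in zip(outcomes, probs))  # 5050.0
ce = certainty_equivalent(outcomes, probs)                    # 1000.0
# A risk-averse (concave-utility) decision maker would trade the gamble
# for a sure payoff far below its expected value.
print(expected_value, round(ce, 2))
```

Here the certainty equivalent ($1,000) is well under the expected value ($5,050), which is exactly the kind of risk discount the Cramer–Bernoulli function formalizes.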
The expected utility model for most people can take four different forms: concave utility, convex utility, and steep or flat indifference curves. Since most people are reluctant to take risk, the required expected value is a function of risk, with greater return demanded for increased risk. With a concave utility function, the required return begins to level off, meaning that the decision maker will not demand a much higher rate of return for additional risk. With a convex function, on the other hand, there is a point beyond which a much higher rate of return is required for even a small amount of additional risk. In both of these cases, the risk-return relationship varies with the amount of risk. Along the indifference curves, however, the change in required return is constant, regardless of the amount of risk observed.
One way to determine how an individual feels about risk is to experimentally observe which choices they would make in three different situations, each offering a set of returns with their respective probabilities. Evidence from such tests has shown that decision makers often make choices that do not follow the logic they would otherwise apply. A simpler approach is to determine what certain return an individual would accept instead of taking a gamble that yields either a fixed payoff or nothing at all. Interestingly, decision makers frequently depart from the expected value of the utility function when making this determination.
Researchers have arrived at several theories that explain why decision makers may make these seemingly irrational choices. First, they may simply not be very experienced at making decisions whose outcomes cannot be easily predicted. Second, even when it was pointed out to these individuals that their decision strategies were not consistent with the expected outcomes of a utility function, they did not always revise their choices. Finally, the decisions made in an experiment may not reflect how decision makers would behave in a real-world situation with real risks.
Since so many individuals do not follow the expected utility approach, models have been created that take the unique preferences of the decision makers into account as well. Unfortunately, these models have limitations of their own. One problem is that they require a unique set of conditions, since an incremental change in risk has a different effect on behavior and preferences at different stages. There are also still restrictions on the variables that go into the function. Therefore, while these preference-based models may better explain individuals’ decisions, they are still not without their own limitations.
While it is clear that there is not usually a linear relationship between risk and return for decision makers with different risk preferences, there are also other problems with the expected utility hypothesis. One problem is that subjects in experiments often change their preferences and decisions after being made to reconsider them. Another is that the way a problem is stated or presented has a profound effect on how a decision maker will respond. Finally, if probabilities are not clear, it is difficult for decision makers to make a decision that is consistent with the expected utility function.

Judgment Under Uncertainty

In their article “Judgment under Uncertainty: Heuristics and Biases”, Tversky and Kahneman discuss how subjective judgments are made to determine the likelihood of uncertain events prior to decision making. They discuss how decision makers try to assess the probability of an event, but how it is often difficult to do so accurately. Often, decision makers can estimate the general magnitude of an event (e.g., very likely or somewhat likely), but it is more difficult to make a more exact determination.
One of the problems that the authors address is the representativeness heuristic. Here, people make judgments based on stereotypes without regard to underlying proportions, or base rates. The authors suggest, for example, that people might guess someone’s occupation from their personality traits without considering how many people are actually employed in that occupation. Probability assessments often omit data on the past frequencies of similar events. Experiments have shown that even when participants are told the proportions ahead of time, they often ignore them when estimating the probabilities of events, relying instead on subjective and irrelevant information.
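The base-rate point can be made concrete with Bayes’ rule; the numbers below are hypothetical, chosen only to illustrate the occupation example:

```python
def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Bayes' rule: P(hypothesis | evidence)."""
    numerator = prior * p_evidence_given_h
    return numerator / (numerator + (1 - prior) * p_evidence_given_not_h)

# Suppose (hypothetically) 30% of a population are engineers, and a
# description that "sounds like an engineer" fits 90% of engineers
# but also 50% of non-engineers.
p = posterior(prior=0.30, p_evidence_given_h=0.90, p_evidence_given_not_h=0.50)
print(round(p, 3))  # 0.435 -- far below the near-certainty a stereotype suggests
```

Even a strongly “representative” description leaves the probability under one half here, because the low base rate of engineers does most of the work that intuition throws away.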
Another problem is that people often have erroneous expectations about probabilities. For example, if a coin is tossed six times, people tend to believe that the sequence H-T-H-T-T-H is more common than the sequence H-H-H-H-T-H. This is because people tend to assume that the properties of the overall process also apply to short local sequences. This type of misconception has been shown by gamblers who make decisions based on the events that have taken place up to that point, rather than on the probabilities of events occurring or recurring. Educated professionals have also been known to make similar mistakes in judgment.
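Worked out, the two coin-toss sequences are exactly equally likely:

```python
# Each specific sequence of six independent fair coin flips has the
# same probability, (1/2) ** 6 = 1/64 -- whether it "looks random" or not.
p_mixed = 0.5 ** 6    # H-T-H-T-T-H
p_streaky = 0.5 ** 6  # H-H-H-H-T-H
print(p_mixed == p_streaky)  # True
```

The intuition that the mixed sequence is more common confuses the frequency of a *category* of outcomes (sequences with three heads) with the probability of one *specific* sequence.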
Another problem is that people often try to make predictions when predictability is very low. For example, people often make decisions about the future profitability of companies based on favorable or unfavorable information. Unfortunately, this information often has little relevance to profitability. Therefore, if there is no information directly linked to the profitability of the company, the prediction should be treated as essentially random. Similarly, decision makers often assume information to be more valid than it actually is. Even more astute individuals make errors in judgment based on statistical misconceptions, such as misapplying the normal distribution.
People also often assess likelihoods based on the availability of information. One way they do this is from their own experiences. Another problem is that people often use a means of finding information which is more convenient, even though it may be much less effective. Finally, they have a tendency to imagine or misconceive facts when there are not many that are readily available.
Decision makers often try to estimate an answer by starting from an initial value and then making adjustments. Unfortunately, this process does not work well because the adjustments are typically insufficient. This is often because the adjustments are based on extrapolated information, with subjects believing that the same patterns will hold throughout. Another problem observed is that subjects made poor choices when comparing conjunctive or disjunctive events with simple events. One finding was that when subjects had to choose between a simple event with a probability of 50% and a conjunctive event with a probability of 48%, they tended to select the less likely conjunctive event. Conversely, when subjects had to choose between the same simple event and a disjunctive event with a probability of 52%, they tended to choose the simple event. These results are consistent with the general finding that subjects overestimate the likelihood of conjunctive events and underestimate the likelihood of disjunctive events.
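The probabilities behind such experiments are easy to verify. This sketch assumes the standard marble-drawing setup (one draw from a half-red bag, seven independent draws from a 90%-red bag, seven independent draws from a 10%-red bag), which may differ in detail from the exact study the article cites:

```python
# Simple bet: red on a single draw from a bag that is 50% red.
p_simple = 0.50

# Conjunctive bet: red on ALL 7 independent draws from a 90%-red bag.
p_conjunctive = 0.90 ** 7       # about 0.478

# Disjunctive bet: red on AT LEAST ONE of 7 draws from a 10%-red bag,
# i.e. the complement of drawing non-red every time.
p_disjunctive = 1 - 0.90 ** 7   # about 0.522

print(round(p_conjunctive, 3))  # less likely than the simple bet
print(round(p_disjunctive, 3))  # more likely than the simple bet
```

Chains of likely events quietly lose probability with every link, while “at least one” events quietly gain it, which is exactly the direction in which intuition errs.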
Throughout the article, the authors discussed different biases that people have in making decisions. They also discussed how these biases were common among educated people and laymen alike, although the mistakes made by professionals were not usually as elementary as those made by laymen. Educated decision makers tend to base decisions on the statistical data available to them. However, they often draw improper assumptions and inferences from the data presented to them, which leads to many of the same misconceptions and errors in judgment.

New Theories on How Decision Making Can be Improved

In their article “How Can Decision Making be Improved”, Milkman, Chugh and Bazerman discuss some of the ways in which decision makers are biased and how they can attempt to overcome these biases in order to make decisions more optimally. They discuss some of the reasons why effective decision making is important, and then go on to discuss how decision making can be improved.
The authors begin by addressing the fact that errors carry many costs. They say that as society becomes more industrialized and dependent on knowledge, the costs of making poor decisions grow higher. Therefore, it is important to understand outcomes and how those outcomes can be improved. This means that decision makers need to know more about their own strategies in order to make better decisions.
The authors say that academic research is expected to help with improving decision making. Professionals in different fields are conducting research to better understand how decisions are made. By understanding how human beings make decisions, they can help them establish how to make decisions better.
The authors discuss some of the approaches that have been proposed to guard against biased decision making. Earlier approaches stressed warning people about biases, helping them understand those biases, supplying feedback, and offering educational programs. Unfortunately, research suggests that these approaches are not very effective. Newer research has focused more on cognitive processes.
The article stresses that people often lack important information about a decision or the capacity to analyze the information they do have. One theory is that people use two different types of thinking: System 1 and System 2. System 1 thinking tends to be faster, more careless, and more emotional. System 2 thinking, on the other hand, is slower, more logical, and more careful. When people lack information or feel rushed to make a decision, they are more likely to resort to System 1 thinking.
The authors claim that it is possible to move from System 1 to System 2 thinking. One of their suggestions is for decision makers to use formal analytical processes in place of intuition. If data is available that shows a link between two variables, decision makers can create a model or formula to help them make a more thought-out decision. Empirical evidence has shown that using this kind of model results in better decisions.
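As a minimal illustration of what “a model or formula” can mean in practice, here is a simple least-squares fit linking two variables; the data and variable names are invented for the example, not drawn from the article:

```python
def fit_line(xs, ys):
    """Ordinary least squares for a single predictor: y ~ a + b*x."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b

# Hypothetical historical data: years of experience vs. performance score.
xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]
a, b = fit_line(xs, ys)

# Score a new case with the formula instead of relying on gut feel.
predicted = a + b * 6
print(round(b, 2), round(predicted, 1))
```

The point is not that the formula is sophisticated; it is that applying the same explicit rule to every case removes the noise and bias of case-by-case intuition.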
Another way to encourage System 2 thinking is to view the situation from the perspective of an outsider. This approach has helped decision makers reduce their overconfidence about how much they knew about a problem, how quickly they could complete a project, and how likely they were to be successful. Decision makers can also be encouraged to play “devil’s advocate” with themselves to reduce decision making biases such as overconfidence, hindsight bias and anchoring.
The authors also discuss a study conducted by Slovic and Fischhoff to combat the effects of hindsight bias. Slovic and Fischhoff believed that subjects experienced hindsight bias when they were unwilling to draw on their knowledge of past situations and apply it to a decision. Another group of researchers concluded that decision makers need to consider the contributions of the other people they work with in order to overcome this bias. A number of groups have also conducted research suggesting that decision making biases can be overcome by analogical thinking. Finally, biases can be reduced by considering many options simultaneously rather than evaluating each of them separately.
The authors emphasize through most of the article that System 2 thinking seems to lead to better decisions than System 1 thinking. However, the end of the article discusses how System 1 thinking can also be used in decision making. They describe how the unconscious mind can identify solutions that may be overlooked by the conscious mind. A newer theory involves changing the environment in which System 1 thinking takes place in order to improve the decision making process. This strategy involves simulating the environment where the decision would take place so that the decision maker can get a better understanding of how they will be making the decision. This process can also help decision makers be more honest about decision making biases they do not like to admit to.

Sometimes Decision Making Requires Thinking in Reverse

In “Decision Making: Going Forward in Reverse”, Einhorn and Hogarth discuss how analyzing past information can be an important way of dealing with the future. The authors note that although managers constantly use both backward-looking and forward-looking analysis, they often do not understand the differences between the two. Since the two processes need to be handled differently, managers often make bad decisions when they do not understand how each should be handled.
The authors claim that when thinking backwards decision makers must begin by trying to find a cause and effect relationship. They describe how most decision makers begin this investigation by trying to identify an unusual event or occurrence that may explain the current situation. They then go on to analyze their theories and determine whether or not they seem to adequately explain the situation or resolve the problem. However, there are a number of possible explanations that may explain the situation, so it is difficult to identify which, if any, of the explanations is appropriate. Therefore, they usually try to experiment to narrow the possible explanations and speculate as to which one is most appropriate.
The authors discuss how decision makers often look for links between causes and effects by looking for similarities between them. They use the example of how, in early medicine, physicians believed that jaundice could be cured by a yellow remedy. Obviously, some of the associations that come up do not make sense. Therefore, people consider different levels of association between cause and effect based on cues. The four categories of cues are: causes come before effects, causes and effects occur at approximately the same time, causes vary with effects, and causes may resemble effects. These cues can give decision makers a sense of which direction to investigate.
The authors suggest several approaches that decision makers can use to think backwards more effectively. One is to use a number of different metaphors, to guard against the flaws associated with any single one. Decision makers should also use more than one cue, and should sometimes promote creative thinking by deliberately going against the cues. They should also consider how many links there are in the chain between a cause and an effect, keeping in mind that the more links there are, the weaker the chain may be. Finally, decision makers should consider multiple explanations and test them experimentally when possible. When experimentation is not possible, decision makers can imagine the situations and circumstances involved to get a better idea of what might happen and how the cause and effect might work.
The authors spend the second half of the article talking about how decision makers think forward. They discuss how most people have a hard time doing so accurately. Interestingly, people tend to have more faith in human judgment than in statistical models, despite the fact that statistical models tend to be much more accurate. Nonetheless, there are a number of reasons why humans tend to not have faith in statistical models.
The primary reason that humans tend to be skeptical of models is that they cannot adequately understand all the variables and the relationships between them. The errors produced by models tend to show up consistently, as opposed to the errors produced by humans, which vary. As a result, the inaccuracies in models tend to be more visible and stand out more in people’s minds. Humans also often try to extrapolate their own patterns, which they believe can be more accurate than the models they would otherwise use.
Decision makers are also skeptical of models because they believe these models are static. To fight this bias, models need to be updated and improved based on new information and newly learned relationships. Another issue the authors raise is that it is important to separate accurate predictions from the effects caused by those predictions. They illustrate this with the example of the president of the United States making a statement that the economy is heading into a recession. If a recession does result, it is important to determine whether the president actually predicted the recession correctly, or whether his statement itself caused the recession.
The final reason humans tend to avoid models is that models are often thought to cost more than the value they provide. The authors argue that even though the cost-benefit tradeoff of a model is difficult to measure, a model will usually pay for itself if it is used often enough. They illustrate this by showing how AT&T used models to reduce bad debt that was costing the company over $100 million a year.
This article was similar to “Competing on Analytics” (Davenport et al., 2005) in that both discussed how statistical models can be of enormous value to decision makers. Both articles stress the limitations of human judgment and the need to consider models that may produce superior results. Similarly, “Automated Decision Making Comes of Age” (Davenport, 2005) discusses computer-operated decision making and the benefits it can yield that human judgment cannot. All three articles stress that even though carefully constructed models can outperform human beings, they are still not widely accepted and are met with skepticism by real-world decision makers.

Making Important Decisions Under Ambiguity

In their article “Robust Decision-making Under Ambiguity”, Erat and Kavadias discuss how decision makers face and deal with ambiguity. They state that ambiguity differs from risk in that ambiguity does not come with probabilities estimated from knowledge and previous experience. Risk is difficult enough to understand and deal with on its own, but when risk cannot be quantified, managers' jobs become even harder. The authors discuss portfolio theory and how experts often cannot even agree on the returns of investments, much less the probabilities of a given return being realized. Even with past data available, it is impossible to derive an accurate and reliable distribution; at best, analysts can develop a confidence interval.
The authors begin by citing work done by Knight, who claimed that decision-making falls into three types of environments. In the known environment, decision makers understand the state of the world to a meaningful degree. In the uncertain environment, the decision maker does not fully know which state of the world will occur, but understands the probabilities involved. Finally, in the ambiguous environment, the decision maker does not know the likelihood of any state.
In the first case, maximizing the outcome is straightforward. In the second case, the decision maker maximizes the expected outcome using probabilities established from previous experience and events: the expected value is the sum of each outcome multiplied by its probability. The third case is much harder to assess. Here, the decision maker should maximize the worst-case scenario, so that the outcome is guaranteed to be at least a certain minimum value. The authors show that the robust formulation of the ambiguous environment can be expressed in a form equivalent to that of the uncertain environment, so that an ambiguous problem becomes indistinguishable from an uncertain one.
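The difference between the uncertain-environment rule (probability-weighted expected value) and the ambiguous-environment rule (maximin) can be sketched in a few lines of Python. The actions, payoffs, and probabilities below are invented for illustration; they are not from the article.

```python
# Hypothetical payoff table: each action maps to its outcome in three
# possible states of the world. All numbers are invented for illustration.
payoffs = {
    "launch_now": [120, 40, -30],
    "wait":       [60, 50, 20],
    "partner":    [80, 30, 10],
}

# Uncertain environment: the probability of each state is known.
state_probs = [0.5, 0.3, 0.2]

def expected_value(outcomes, probs):
    """Sum of each outcome weighted by its probability."""
    return sum(p * o for p, o in zip(probs, outcomes))

# Uncertain-environment rule: pick the action with the highest expected value.
best_uncertain = max(payoffs, key=lambda a: expected_value(payoffs[a], state_probs))

# Ambiguous-environment rule (maximin): with no probabilities to rely on,
# pick the action whose worst-case outcome is the largest.
best_ambiguous = max(payoffs, key=lambda a: min(payoffs[a]))

print(best_uncertain)   # the bold action wins on expected value
print(best_ambiguous)   # a safer action wins on its worst case
```

Note that the two rules can recommend different actions from the same table: the expected-value rule tolerates a bad worst case if the average is high, while maximin guarantees a floor on the outcome, which is exactly the tradeoff the authors describe.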
The authors discuss a real-world situation in which ambiguity plays a key role. Customers desire a certain minimum level of performance and are willing to pay up to a maximum price. The authors say that for market models such as this, customer preferences are usually assumed to follow a normal distribution with some standard deviation. Also, the cost of developing the product increases with performance. Therefore, to maximize the number of customers who will purchase the product, the firm must pay higher development costs. Unfortunately, firms do not know the exact characteristics of their target customers. Since the firm must maximize profits by matching performance with customer expectations, it is difficult to know how to optimize the return. The firm's best bet is to develop a profit function based on performance and make a good estimate of customers' performance requirements.
They illustrate this situation with a sophisticated equation:
Π(P, µ) = ∫ (1/√(2πσ²)) · e^(−(θ−µ)²/(2σ²)) · M(θ) dθ − C(P)
The limits on the integral (shown on pg. 6 of the article) run from the minimum required performance specification (T) across the customers in the market.
In essence, this equation says that profit is the price each customer segment is willing to pay, summed across the entire market, minus the cost C(P) of developing a product with performance P.
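The profit function can also be explored numerically. The sketch below is an illustrative reconstruction rather than the authors' model: the margin function M(θ), the cost function C(P), all parameter values, and the assumption that the integral runs from the minimum requirement T up to the delivered performance P are hypothetical choices made for the example.

```python
import math

# Hypothetical parameters: customer performance requirements theta are
# normally distributed with mean mu and standard deviation sigma.
mu, sigma = 50.0, 10.0
T = 40.0                      # minimum performance any customer will accept

def density(theta):
    """Normal density of customer requirements: (1/sqrt(2*pi*sigma^2)) * exp(-(theta-mu)^2/(2*sigma^2))."""
    return math.exp(-(theta - mu) ** 2 / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

def margin(theta):
    """M(theta): assumed flat contribution per unit of customer mass."""
    return 100.0

def cost(P):
    """C(P): development cost rising faster than linearly with performance."""
    return 0.05 * P ** 1.5

def profit(P, n=2000):
    """Midpoint-rule approximation of the integral from T to P of density * M, minus C(P)."""
    if P <= T:
        return -cost(P)
    step = (P - T) / n
    area = sum(density(T + (i + 0.5) * step) * margin(T + (i + 0.5) * step)
               for i in range(n)) * step
    return area - cost(P)

# Scan performance levels to see roughly where profit peaks: high enough to
# capture most of the demand, but not so high that development cost dominates.
best_P = max(range(41, 101), key=profit)
```

Under these made-up numbers, profit first rises as higher performance captures more of the normally distributed demand, then falls as C(P) outgrows the remaining demand, so the scan lands at an interior optimum rather than at either extreme.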
The authors conclude the example by reinforcing the concepts they had already addressed. As in any other situation, the firm's managers should address their risks by maximizing their worst-case scenarios. Here, there is a different worst-case scenario for every possible action, so managers first need to identify each action and then identify its worst-case scenario.
The authors did a good job explaining the differences between uncertain and ambiguous environments. Their message seemed to be that in an ambiguous situation, decision makers cannot really maximize expected outcomes, because it is impossible to predict or even guess what those outcomes will be; therefore, the authors advocate maximizing the worst-case scenario in order to bound potential losses. Unfortunately, they did not provide many concrete examples to illustrate these concepts, which were presented mostly in theoretical terms. The model they introduced was also sophisticated and confusing, but the basic concepts were clear and understandable.

Decision Making: It's Not What You Think

In their article “Decision Making: It’s Not What You Think”, Mintzberg and Westley discuss the logical procedure for making decisions and how many individuals fail to follow it, even though they believe they should. The three approaches to decision making described in the article are “thinking first,” “seeing first,” and “doing first.” The authors suggest that although the “thinking first” model seems ideal to most people, people do not actually use it most of the time when making real-world decisions.
The authors discuss how the formal decision-making process is seemingly straightforward and easy to implement, yet in reality it is more complicated. New information often arises, and new perceptions form, forcing decision makers to reevaluate their decisions. The process can become very time consuming, and urgency sometimes requires decision makers to cut it short and make a decision, even if the decision is not the optimal one. The authors give the example of chess master Alexander Kotov, who describes how he often tries to plan his moves carefully, but when concerned about running out of time ends up making more impulsive decisions.
The article provides two theories as to why the formal decision-making process does not seem to be commonly used in real-world situations. The authors reference research conducted by A. Langley and her colleagues describing how decision makers often begin theorizing about possible solutions until someone in the group says something that makes the answer obvious; at that point, they simply take the first decision that seems to make the most sense. Another theory is that decision making can be considered “organized anarchy,” based on research conducted by James March and some of his colleagues.
The article then discusses why the “seeing first” approach tends to have more credibility. Individuals often make observations that inspire thought or make the thinking process more productive. An example used in the article is the biologist Alexander Fleming, who observed that mold in his laboratory was killing bacteria, which gave him the idea that led to penicillin. As in the case of Archimedes, an observation is often the critical event that must take place before a problem can be solved. The authors go on to say that for this process to work, decision makers need to be able to see what others cannot.
The “doing first” approach is sometimes necessary when there is not enough information to make a competent decision. Sometimes the only option is to experiment with the available options to determine which works best. The authors discuss how doing can stimulate thinking just as easily as thinking can lead to doing. Doing is essential to the learning process, as the most successful companies have acknowledged.
The authors describe several experiments they conducted to determine which approaches people tend to use and how those approaches tend to be structured. According to their research, the “thinking first” approach has complications and limitations we would not initially expect. One problem is that the quality and thoroughness of analysis come at the expense of efficiency. Also, most decision makers do not apply the discipline needed to make detailed decisions, and their decisions tend to be too theoretical because they fail to acknowledge real-world limitations that would keep a decision from working effectively.
The experiments suggested that the “seeing first” approach has several advantages over “thinking first.” Many subjects said they felt more involved in the decision making when they could see the actual circumstances they were dealing with rather than thinking about them hypothetically. Evidence also suggests that the “seeing first” experiments involved more creativity than the “thinking first” experiments, and these factors made the “seeing first” process more memorable to participants. The “doing first” approach also had some unique advantages. The primary one was that participants seemed to resist many of the effects of groupthink, as they felt more comfortable stating their opinions and concerns. It also relieved some of the pressure of making hasty decisions, since they did not waste time on unnecessary analysis.
The authors of the article emphasize that while thinking is certainly important as a part of the decision-making process, it is not as useful when it is a process that is isolated from seeing and doing. The “thinking first” approach may work well when the problem is well understood. However, the “seeing first” approach may be needed when the problem is relatively complex. Finally, the “doing first” approach may be the only option when decision makers do not initially have enough information to make a decision. Therefore, the traditional model for solving problems in a way that emphasizes cognitive processes seems to be overused. Instead, decision makers should try to make decisions in a way that combines thinking, seeing and doing.

Competing on Analytics

The article “Competing on Analytics” (Davenport, 2005) discusses how organizations are relying more on quantitative models than in previous years. Companies in the financial services industry have used analytics for years, but companies in other industries have recently begun making extensive use of quantitative models in less conventional ways.
While all companies use quantitative models to some extent, some rely on them much more heavily than others. Davenport claims that several important attributes identify companies that use analytics as a central part of their strategy. One of the most important is that one or more of the highest-level managers must promote analytics within the organization, because creating an analytics-based organization requires a strong shift in organizational culture and practice. Some lower-level employees have helped make such a change in their companies, but the change usually needs to come from a manager with more power and influence.
Companies with strong analytics programs also use sophisticated techniques for analyzing and predicting different variables. These companies maximize effectiveness in areas such as customer retention, pricing, and inventory management through the use of analytics. In addition to conducting thorough analyses, they run experiments to better understand the models they will use in the future. Analytics are applied across many different functions within the organization and become a standard part of operations.
Davenport and his colleagues interviewed many organizations with reputations for being highly analytical, and identified several stages these organizations went through. Initially, organizations are interested in becoming more analytical but lack the resources or skills to do so. They usually begin by applying analytics in a specific, narrowly defined area. Over time, they expand their use of analytics until they are finally able to develop a program that gives them an advantage over their competitors.
In addition to identifying the factors that are common to organizations with strong analytics programs and the stages they must go through to develop them, Davenport identified the different areas where analytics could be helpful to the organization. Davenport found that analytics were useful in identifying and retaining key customers, managing supply chains, developing new products and minimizing costs. They also were commonly used for overseeing the overall strategy of the organizations.
Finally, Davenport discussed what firms needed for successful analytics programs. His group claimed that the most important factor was access to large amounts of high-quality data. Firms also needed to be capable of processing and interpreting this data, which requires a strong technological environment and employees with strong quantitative competencies. If there are no employees within the firm that are capable of interpreting the data, the company has the option of outsourcing these functions to other organizations.
In addition to needing plenty of data and the capability to interpret it, organizations need to make the importance of this data clear so that members of the organization understand it. If top executives are not willing to accept the use of analytics, the organization is unlikely to develop a strong analytics program regardless of the amount or quality of data available. If current leaders do not advocate the use of analytics, a change in management can encourage it. The executive who successfully implemented analytics was not always the CEO, although CEOs tended to be the most successful champions.
Organizations have done different things to create demand for analytics among their members. Many have appointed specific individuals, or even assigned entire teams, to handle the analytics of different functional divisions. Another way for lower-level employees to promote analytics is to provide busy, impatient executives with readily available data, so that taking an analytical approach does not cost them time. Changing an organization to make it more analytically based always involves persuading higher-level managers and other relevant stakeholders that using analytics is realistic and would improve the organization.
Many companies are beginning to use analytics to become more competitive. Davenport makes it clear that analytics can be a strong tool for improving performance in a number of different areas. However, a number of factors can limit companies' ability to use analytics, and managers' ability to convince executives and other individuals to give analytics a central role in the organization.

Automated Decision Making

In his article "Automated Decision Making", Davenport discusses the historical reluctance to adopt automated management decision-making systems in organizations. In the past, key decision makers did not feel comfortable letting computers make complex decisions for the organization. Their skepticism stemmed mostly from the complexity of decision making and the fact that real-world decisions could not be reduced to a few simple variables. Another reason managers resisted these systems was that they would be too complex for most users to understand. Even systems developed merely to support managers in the decision-making process were difficult to use because of the quantitative skills they required.
However, these systems are now becoming commonplace within many organizations for two reasons. First, advances in information technology have made it possible for these systems to solve more complex business problems. Secondly, business decisions have become increasingly more complex, to the point that many of them cannot be adequately analyzed by human beings. These new systems do not require much human involvement, which means that organizations do not have to assign specialists to manage them.
Automated decision-making systems are still only useful for certain types of problems. They are practical only where there is a large supply of electronic information, where the problem is well understood and the methodology for handling it is clear, and where the standard for an optimal decision is not too subjective. They are also most appropriate in situations where decisions must be made quickly.
Automated decision-making systems can be especially useful for solving problems where consistency is needed. The likelihood that they will make a mistake is much lower than that of a human being solving the same problem, and unlike humans, they do not treat each problem on a case-by-case basis or substitute their own judgment. They can also pick up on sensitive data and detect changes that human employees would easily overlook.
Even though automated systems still require access to significant amounts of electronic information, they are increasingly being used for applications that do not appear highly quantitative, although such applications still rely on quantitative data. For example, Davenport discusses how a major winery uses sensors to monitor temperature and other weather conditions and passes this information to automated systems, which then make decisions such as how much water to provide when irrigating the grapes in the vineyard. The more complex decisions must still be made by human operators.
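A rule-based system of the kind the winery example describes could be sketched as follows. Everything here is hypothetical: the sensor field names, thresholds, and formula are invented to show the shape of such a system, including the hand-off to a human when sensor data is missing.

```python
def irrigation_decision(reading):
    """Return litres of water per vine, or None to defer the decision to a human.

    `reading` is a dict of sensor values; all field names and thresholds
    are hypothetical.
    """
    temp = reading.get("temp_c")
    soil = reading.get("soil_moisture")    # fraction of saturation, 0.0-1.0
    if temp is None or soil is None:
        return None                        # incomplete data: escalate to a human
    if soil >= 0.35:
        return 0.0                         # soil is already wet enough
    # Drier soil increases the amount; hot weather scales it up further.
    litres = (0.35 - soil) * 20.0
    if temp > 30:
        litres *= 1.5
    return round(litres, 2)

print(irrigation_decision({"temp_c": 33, "soil_moisture": 0.20}))  # hot, dry day
print(irrigation_decision({"temp_c": 33}))                         # missing sensor -> None
```

The `None` branch is the key design choice: when the system lacks the data for a reliable decision, it hands the case to a human operator rather than guessing, which is exactly the kind of exception handling discussed below.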
Although automated systems have provided accuracy, consistency, and timely decisions for managers, they have also introduced new problems for organizations. The largest is that executives must make sure the standards, limits, and variables of the systems they use are well defined and understood. If they are not, the system will still deliver a solution based on the information it was given, but most likely not the solution executives are looking for. Davenport discusses how Cisco Systems made poor assumptions in the automated system it was using to manage inventory, which ended up costing the company over $2 billion. In that case, the company not only defined the system's inputs poorly but also failed to monitor the system sufficiently.
Managers must also recognize the need for exceptions. Since automated systems operate with strict consistency, humans must be able to oversee them and identify situations where the decision determined by the system must be overruled. If a computer does not have enough data to make a reliable decision, it must be programmed to hand the decision off to a human. Davenport states that some organizations, such as hospitals, punish their employees for not overruling these systems; he argues that managers need to understand why these decisions are made. Another problem is the need to find experts capable of setting up and managing these systems. Davenport mentions an insurance company that had to discontinue its system because no one was available to manage it.
Legal and political issues can also affect how automated systems will be used in coming years. Davenport mentions a hospital that was sued over a decision made by an automated system, and future lawsuits may discourage companies from using such systems. Laws are also being written not only to regulate the use of automated systems but also to govern companies' access to personal information and how that information may be used when making decisions that affect individuals.
Davenport’s main point in this article is that automated systems are becoming more widely used and have distinct advantages over human beings for certain decisions. However, he stresses that these systems have limitations, and he shows how several companies made costly mistakes by overlooking those limitations and failing to assign employees to monitor the systems appropriately. While many factors can contribute to these problems, the biggest reason automated systems deliver poor decisions is that the variables and situations in which they are employed are not properly defined or set up. Therefore, companies must carefully oversee the automated systems they employ and make sure the problems those systems solve are well understood.