In their article “The Right Game: Use Game Theory to Shape Strategy,” Brandenburger and Nalebuff discuss how game theory works and how companies can use its principles to make decisions. The authors argue that managers can use these principles to devise new strategies for competing in which the chances of success are much higher than they would be if the companies continued to compete under the same rules. A classic example from the article is the case of General Motors. The automobile industry was incurring heavy expenses from the incentives being offered at retailers. General Motors responded by issuing a new credit card that let cardholders apply a portion of their charges toward the purchase of a GM car. GM even went so far as to allow cardholders to apply a smaller portion of their charges toward the purchase of a Ford car, allowing both companies to raise prices and increase long-term profits. This move created a new system in which both GM and Ford could be better off, unlike the traditional competitive model in which one company must profit at another's expense.
The authors state that while the traditional win-lose strategy may sometimes be appropriate, the win-win approach can be ideal in many circumstances. One advantage of win-win strategies is that, because they have not been used much, they can yield many previously unidentified opportunities. Another major advantage is that, since other companies have the chance to come out ahead as well, they are less likely to resist. The last advantage is that when other companies imitate the move, the initial company benefits as well, in contrast to losing ground as it would in a win-lose situation.
The authors also state that there are five elements to competition that can be changed to provide a more optimal outcome. These elements are: the players (or companies competing), added values brought by each competitor, the rules under which competition takes place, the tactics used, and the scope or boundaries that are established. By understanding these factors, companies can apply different strategies to increase their own odds of success.
The first way companies can increase their chances of success is by changing who the players in the business are, for example by introducing new companies into it. Both Coke and Pepsi bought their sweetener from Monsanto, which held a monopoly at the time, so they encouraged Holland Sweetener Company to enter the market and compete with Monsanto. Once Monsanto no longer appeared to hold a monopoly, Coke and Pepsi were able to negotiate more favorable contracts with it. Another way companies can improve their chances is by helping other firms introduce more or better complementary products.
Companies can also change the added values of themselves or their competitors. Obviously, companies can build a better brand or change their business practices so they operate more efficiently. However, the authors discuss how reducing the added value of other players can also be a viable strategy. Nintendo reduced the added value of retailers by not filling all of their orders, creating a shortage and reducing the bargaining power of the stores buying its products. It also limited the number of licenses available to aspiring programmers, lowering their added value. It even lowered the value held by comic-book characters by developing characters of its own that became widely popular, presumably so that it would not have to pay as much to license those characters.
Changing the rules is another way in which companies can benefit. The authors introduce the idea of judo economics, in which a large company may be willing to let a smaller company capture a small market share rather than compete by lowering its prices. As long as it does not become too powerful or greedy, a small company can often participate in the same market without having to compete with larger companies on unfavorable terms. Kiwi International Air Lines, for example, entered markets with lower-priced service to gain share, but made sure its competitors understood that it had no intention of capturing more than 10% of any market.
Companies can also change perceptions to make themselves better off, either by making things clearer or by making them more uncertain. In 1994, the New York Post made radical price changes in an attempt to get the Daily News to raise its own price and thereby regain subscribers. The Daily News misunderstood, however, and the two newspapers headed toward a price war. Once the New York Post made its intentions clear, both papers were able to raise their prices without losing revenue. The authors also show how investment banks can maintain ambiguity to their own benefit: if the client is more optimistic than the investment bank, the bank can charge a higher commission as long as the client does not develop a more realistic appraisal of the company's value.
Finally, companies can change the boundaries within which they compete. For example, when Sega was unable to gain market share from Nintendo’s 8-bit systems, it changed the game by introducing a new 16-bit system. It took Nintendo 2 years to respond with its own 16-bit system, which gave Sega the opportunity to capture market share and build a strong brand image. This example shows how companies can think outside the box to change the way competition takes place in their industry.
Brandenburger and Nalebuff have illustrated how companies that recognize they can change the rules of competition can vastly improve their odds of success, and sometimes respond in a way that benefits both themselves and the competition. If companies are able to develop a system where they can make both themselves and their competitors better off, then they do not have to worry so much about their competitors trying to counter their moves. Also, because companies can easily copy each other’s ideas, it is to a firm’s advantage if they can benefit when their competitors copy their idea, which is not usually possible under the traditional win-lose structure.
This article has some parallels with the article “Competing on Analytics.” The biggest factor the two articles have in common is how crucial it is for managers to understand everything they can about their business and the environment in which they operate. In “Competing on Analytics,” the authors say it is important to be familiar with this information so that managers can change the way they compete and improve their chances of success. At the end of “The Right Game: Use Game Theory to Shape Strategy,” the authors argue that for companies to change the environment or rules under which they compete, they must understand everything they can about the constructs within which they are competing. Whether a manager intends to use analytics or game theory to succeed, he or she must first gather all available information and use it to understand how to make the company better off. However, “Competing on Analytics” places its emphasis almost exclusively on using quantitative data to improve a company's efficiency or market share. “The Right Game,” by contrast, focuses on using information to find creative ways of changing the constructs or rules that apply between companies, often yielding a much broader impact.
Monday, February 8, 2010
Tuesday, February 2, 2010
Excessive Planning Can Get in the Way of Good Decisions
In their article “Stop Making Plans; Start Making Decisions,” Mankins and Steele discuss the perils of organizational planning. They explain how the planning process can be cumbersome and time-consuming, leaving less time for implementing decisions. They also claim that the plans organizations make often become useless by the time they can finally be implemented. The authors argue that organizations need to spend less time making plans, while also keeping the process constantly updated so that their plans do not become obsolete later on.
The article discusses how the CEO of ExCom developed a new planning process intended to improve the quality of the organization's decisions and operations. The process required executives to hold long, intensive sessions with the various heads of management. Unfortunately, it did not improve the organization's outcomes at all, and other members of the organization did not feel it worked either. Anonymous respondents said the process was not only time-consuming but also failed to help managers make real decisions.
The authors say that many executives have lost confidence in the strategic planning process. Many organizations have invested substantial resources in developing plans, only to find that those plans actually make decision making more difficult. Two obstacles keep strategic planning from working correctly. First, planning is conducted on a periodic basis with the information available at the time; many changes take place between planning sessions, which renders the plans obsolete. Second, plans are made for individual units and may not necessarily contribute to the success of the organization as a whole. As a result, executives tend to make decisions that are inconsistent with the planning criteria they have established.
The largest problem with periodic planning is that it ignores the fact that decision making is an ongoing process. Plans made earlier do not account for new information and changed variables, yet managers must make their ongoing decisions against those plans. When a competitor introduces a new product or a new competitor enters the market, managers have to weigh that event against a set of plans made before it took place. Other, less obvious problems arise from planning as well. For example, planning at the functional-unit level of the organization often causes functional managers to become irritated with higher-level executives.
The authors' central point is that managers must focus on strategic planning that has a direct impact on decision making. They discuss several approaches that successful executives have used to make their planning processes more consistent with the decisions they will have to make. Most importantly, these executives keep planning and decision making as two separate but integrated processes. They also focus on a few key themes, make strategy development a continuous process, and structure strategy reviews to promote real decisions.
The authors say that effective managers must create a process in which planning and decision making are done in parallel. This is primarily done by determining which decisions will have to be made, rather than what the final decisions will be. For example, Boeing sets up financial forecasts and reviews its business plan regularly in order to stay on track and remain aware of the milestones it will have to reach. The organization has also developed a Strategy Integration Process to identify and address critical strategic issues as they come to light.
Mankins and Steele also claim that successful managers choose a limited number of variables or themes to focus on, and make sure those themes apply across numerous divisions within the company. This avoids the time it would take to cover every issue within a single function in its entirety, and it ensures that managers focus on issues crucial to the organization as a whole. The authors note that Microsoft has seven business units, and that every strategy to be implemented must cover at least two of them.
In order for the strategy development process to be effective, it must be made continuous. This makes it possible for managers to address one issue at a time and be ready to make a decision once the planning is complete. A continuous process also has the advantage of being applicable throughout the organization rather than only at the business-unit level. Textron is one company that has adopted this approach. Previously, the company held all of its business-unit reviews in the second quarter; now it reviews a few of its business units every quarter. This new planning process has helped Textron go from an average performer to a superior one.
Finally, the best planning processes structure their reviews so that the reviews themselves lead to better decisions. Textron's initial planning meeting involves reviewing key facts such as the profitability of different markets and the actions of consumers and competitors. Evaluating this information is seen as essential to the later stages of the planning process.
The information in this article coincides with many of the points raised in Davenport's “Paralysis by Analysis and Extinction by Instinct.” Both articles stress that spending too much time analyzing information is not only time-consuming but can lead to decisions based on data that is no longer valid. The article also parallels “Decision Making: It's Not What You Think.” Both pieces argue that thinking about a problem does no good unless the decision maker is willing to act and makes taking action part of the decision-making process.
The Role of Risk in Decision Making
In his article “Decision-Making in the Presence of Risk,” Machina discusses the role that risk plays in making decisions and the factors that affect how risk should be managed. He begins by noting that if the probability of an event is known, observed results will converge to the expected value as the number of trials approaches infinity. In a single trial, however, risk plays a much larger role, which in turn affects the way decisions are made. He uses the example of the St. Petersburg Paradox, a game whose expected payoff is infinite because of its possible extremely high payoffs; yet because the most likely payoffs are only a few dollars, individuals are willing to pay only a small amount to participate in the game.
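The paradox is easy to verify numerically. The sketch below is my own illustration rather than anything from Machina's article, and it assumes the common payoff rule of $2^n for a first head on flip n:

```python
import random

# St. Petersburg game: a fair coin is flipped until it lands heads.
# If the first head appears on flip n, the payoff is 2**n dollars.
# Each term of the expected-value sum contributes (1/2**n) * (2**n) = 1,
# so the sum diverges -- the "expected value" is infinite.
partial_ev = sum((0.5 ** n) * (2 ** n) for n in range(1, 31))
print(partial_ev)  # 30 terms -> 30.0, since each term adds exactly 1

def play_once(rng):
    """Simulate one round; return the payoff in dollars."""
    n = 1
    while rng.random() < 0.5:  # tails: keep flipping
        n += 1
    return 2 ** n

rng = random.Random(0)
payoffs = [play_once(rng) for _ in range(100_000)]
# Despite the infinite expectation, typical payoffs are tiny: half of
# all rounds pay just $2, which is why few people would pay a large
# entry fee to play.
print(sorted(payoffs)[len(payoffs) // 2])  # median payoff, a few dollars
```

The divergent sum and the tiny median make the tension concrete: the theoretical expectation is unbounded, but almost every single play pays very little.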
The article goes on to discuss how Gabriel Cramer and Daniel Bernoulli developed a utility function that adjusts for risk. Using this function, decision makers can determine whether it is better to take a payoff with no risk attached or a larger payout carrying a certain degree of risk. Over the last two hundred years, a large number of studies have supported the validity of this theory when it is applied appropriately. However, evidence has also shown that people often do not use these models when making a decision, and often ignore the models' results when actually deciding.
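Bernoulli's idea can be sketched with a logarithmic utility function. The specific gamble and dollar amounts below are invented for illustration; they are not figures from the article:

```python
import math

def expected_utility(outcomes, utility=math.log):
    """Expected utility of a gamble given as (probability, payoff) pairs."""
    return sum(p * utility(x) for p, x in outcomes)

def certainty_equivalent(outcomes, utility=math.log, inverse=math.exp):
    """Riskless amount the decision maker values equally to the gamble."""
    return inverse(expected_utility(outcomes, utility))

# A 50/50 gamble between $50 and $150 has an expected value of $100...
gamble = [(0.5, 50.0), (0.5, 150.0)]
ev = sum(p * x for p, x in gamble)
# ...but a risk-averse (concave, log-utility) decision maker values it
# at less than $100, so a sure payment below $100 can be preferred.
ce = certainty_equivalent(gamble)
print(ev, round(ce, 2))  # 100.0 vs roughly 86.6
```

The gap between the expected value and the certainty equivalent is exactly the sense in which the utility function "adjusts for risk": the riskier the gamble, the larger the sure discount a risk-averse decision maker will accept.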
For most people, the expected utility model can take four different forms: concave utility, convex utility, and steep or flat indifference curves. Since most people are reluctant to take risk, the required expected return is a function of risk, with greater risk demanding greater return. With a concave utility function, the required return begins to level off, meaning the decision maker does not demand a much higher rate of return for additional risk. With a convex function, by contrast, there comes a point where a much higher rate of return is required for even a small amount of additional risk. In both cases, the risk-return relationship varies with the amount of risk. Along an indifference curve, however, the change in expected return is constant regardless of the amount of risk.
An individual's attitude toward risk can be determined experimentally by observing the choices he or she would make in three different situations, each offering a set of returns with their respective probabilities. Evidence from these tests has shown that decision makers often make choices that do not follow the logic they would otherwise apply. A simpler approach to determining an individual's risk-aversion preferences is to ask what certain return the individual would accept instead of taking a gamble that pays a given amount or nothing at all. Interestingly, decision makers frequently depart from the expected value of the utility function in making this determination.
Researchers have arrived at several theories to explain why decision makers make these seemingly irrational choices. First, they may simply be inexperienced at making decisions whose outcomes cannot easily be predicted. Second, when the inconsistency between their strategies and the expected outcomes of a utility function was pointed out to them, individuals would often revise their choices, suggesting the departures were simple errors. Finally, the decisions made in an experiment may not reflect how decision makers would behave in a real-world situation with real risks.
Since so many individuals do not follow the expected utility approach, models have been created that also take the unique preferences of decision makers into account. Unfortunately, these models have limitations as well. One problem is that they require a unique set of conditions, since an incremental change in risk has a different effect on behavior and preferences at different stages. There are also restrictions on the variables that go into the function. So while these preference-based models may better explain individuals' decisions, they are not without their own limitations.
While it is clear that there is usually no linear relationship between risk and return for decision makers with different risk preferences, the expected utility hypothesis faces other problems as well. One is that subjects in experiments will often change their preferences and decisions after being asked to reconsider them. Another is that the way a problem is stated or framed has a profound effect on how a decision maker responds. Finally, if probabilities are not clear, it is difficult for decision makers to make choices consistent with the expected utility function.
Judgment Under Uncertainty
In their article “Judgment under Uncertainty: Heuristics and Biases,” Tversky and Kahneman discuss how subjective judgments are made to determine the likelihood of uncertain events prior to decision making. They explain that decision makers try to assess the probability of an event, but that it is often difficult to do so accurately. Decision makers can often estimate the general magnitude of a likelihood (e.g., very likely or somewhat likely), but a more exact determination is harder to make.
One of the problems the authors address is the representativeness heuristic, in which people make judgments based on stereotypes without considering the underlying proportions, or base rates. As an example, they suggest that people might guess someone's occupation from personality traits without considering how many people are actually employed in that occupation. Probability assessments often omit data on the past frequency of similar events. Experiments have shown that even when participants are told the proportions ahead of time, they often ignore them when estimating the probabilities of events, basing their estimates instead on subjective and irrelevant data.
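The base-rate point can be made concrete with Bayes' rule. The occupations and numbers below are invented for illustration, not figures from Tversky and Kahneman's experiments:

```python
# Suppose 70% of a population are farmers and 30% are librarians
# (invented base rates), and a personality sketch sounds "librarian-
# like": 80% of librarians fit it, but so do 20% of farmers.
p_librarian = 0.30
p_farmer = 0.70
p_sketch_given_librarian = 0.80
p_sketch_given_farmer = 0.20

# Bayes' rule:
#   P(librarian | sketch) = P(sketch | librarian) * P(librarian) / P(sketch)
p_sketch = (p_sketch_given_librarian * p_librarian
            + p_sketch_given_farmer * p_farmer)
posterior = p_sketch_given_librarian * p_librarian / p_sketch
# 0.24 / 0.38 ~= 0.63 -- well below the ~0.80 that a judgment based on
# representativeness alone would suggest, because the base rate matters.
print(round(posterior, 2))  # 0.63
```

Ignoring the 70/30 base rate and answering "80%" is exactly the error the heuristic produces.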
Another problem is that people often hold erroneous expectations about probabilities. For example, if a coin is tossed six times, people tend to believe that the sequence H-T-H-T-T-H is more common than the sequence H-H-H-H-T-H, even though the two are equally likely. This is because people expect the properties of the overall random process to show up in short local sequences as well. This misconception is seen in gamblers who make decisions based on the events that have occurred up to that point, rather than on the actual probabilities of events occurring or recurring. Educated professionals have been known to make similar mistakes in judgment.
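The coin-toss claim is a one-line calculation. This small sketch (my own, not from the article) shows that both specific sequences have exactly the same probability:

```python
# Every specific sequence of six fair-coin flips has probability
# (1/2)**6 = 0.015625, even though H-T-H-T-T-H "looks" more random
# than H-H-H-H-T-H.
def sequence_probability(seq, p_heads=0.5):
    """Probability of one exact H/T sequence from independent flips."""
    return (p_heads ** seq.count("H")) * ((1 - p_heads) ** seq.count("T"))

print(sequence_probability("HTHTTH"))  # 0.015625
print(sequence_probability("HHHHTH"))  # 0.015625
```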
Another problem is that people often try to make predictions when predictability is very low. For example, people often make decisions about the future profitability of companies based on favorable or unfavorable information that has little actual relevance to profitability. If no information directly linked to a company's profitability is available, the predicted profitability should be treated as essentially random. Similarly, decision makers often assume information to be more valid than it actually is, and even statistically astute individuals make errors in judgment based on statistical misconceptions, such as those involving the normal distribution.
People also assess likelihoods based on the availability of information. One source is their own experience. Another problem is that people often use whatever means of finding information is most convenient, even if it is much less effective. Finally, people tend to imagine or misconstrue facts when few are readily available.
Decision makers often estimate an answer by starting from an initial value and then making adjustments. Unfortunately, this process works poorly because the adjustments are typically insufficient. The adjustments are often based on extrapolated information, since subjects believe the same patterns will hold throughout. Another problem observed is that subjects misjudge conjunctive and disjunctive events relative to simple events. One finding was that when subjects had to choose between a simple event with a probability of 50% and a conjunctive event with a probability of 48%, they tended to select the less likely conjunctive event; yet when they had to choose between a simple event at 50% and a disjunctive event with a probability of 52%, they tended to choose the simple event. This is consistent with the general finding that subjects overestimate the likelihood of conjunctive events and underestimate the likelihood of disjunctive events.
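Probabilities close to the 48% and 52% figures arise naturally from repeated independent tries, which is the standard way this experiment is illustrated; the parameters below are my own choice and may differ from the article's exact setup:

```python
# Conjunctive event: succeed on ALL of 7 independent tries, each with
# probability 0.9. The individual steps are very likely, but the joint
# event is slightly worse than even.
p_conjunctive = 0.9 ** 7
print(round(p_conjunctive, 3))  # 0.478 -- yet people tend to prefer it to 50%

# Disjunctive event: succeed on AT LEAST ONE of 7 independent tries,
# each with probability 0.1. The individual steps are unlikely, but
# the joint event is slightly better than even.
p_disjunctive = 1 - (1 - 0.1) ** 7
print(round(p_disjunctive, 3))  # 0.522 -- yet people tend to pass it up for 50%
```

The intuition slips because people anchor on the probability of a single step (0.9 or 0.1) and adjust too little for the number of steps, which is exactly the anchoring-and-adjustment failure described above.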
Throughout the article, the authors discussed different biases that people have in making decisions. They also discussed how these biases were common among educated people as well as to laymen, although the mistakes made by professionals were not usually as elementary as those made by laymen. Educated decision makers tend to try to make decisions based on statistical data that is available to them. However, they often make improper assumptions and inferences from the data that is presented to them, which leads to many of the same misconceptions and errors in judgment.
Read more!
One of the problems that the authors address is the representativeness heuristic. Under this heuristic, people make judgments based on stereotypes without considering underlying proportions. The authors illustrate this with an example: people might guess the likelihood of someone’s occupation from personality traits without considering how many people are actually employed in that occupation. Assessments of probability often ignore data on the base rates of similar past events. Experiments have shown that even when participants are told these proportions ahead of time, they often ignore them and base their probability judgments on subjective and irrelevant information.
Another problem is that people often have erroneous expectations about probabilities. For example, if a coin is tossed six times, people tend to believe that the sequence H-T-H-T-T-H is more common than the sequence H-H-H-H-T-H, even though the two sequences are equally likely. This is because people tend to assume that the properties of the overall process must also appear in short local sequences. This misconception shows up among gamblers, who make decisions based on the events that have taken place up to that point rather than on the probabilities of events occurring or recurring. Educated professionals have been known to make similar mistakes in judgment.
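The equal likelihood of the two sequences is easy to verify. The sketch below (a quick simulation with an arbitrary trial count, not from the article) estimates how often each specific six-toss sequence occurs; both estimates settle near (1/2)^6 ≈ 0.0156.

```python
import random

# Each specific sequence of six fair-coin tosses has probability
# (1/2)**6 = 1/64, however "random" or "regular" it looks.
random.seed(1)

def estimate(target, trials=200_000):
    """Fraction of simulated six-toss runs that exactly match `target`."""
    hits = sum(
        "".join(random.choice("HT") for _ in range(6)) == target
        for _ in range(trials)
    )
    return hits / trials

p_irregular = estimate("HTHTTH")   # looks "random"
p_regular   = estimate("HHHHTH")   # looks "non-random"

print(p_irregular, p_regular)      # both near 1/64 ≈ 0.0156
```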
Another problem is that people often try to make predictions when predictability is very low. For example, people often make decisions about the future profitability of companies based on favorable or unfavorable information. Unfortunately, this information often has little relevance to profitability. Therefore, if there is no information directly linked to a company's profitability, the predicted profitability should be treated as essentially random. Similarly, decision makers often assume information to be more valid than it actually is. Even more astute individuals make errors in judgment based on statistical misconceptions, such as those involving the normal distribution.
People also often assess likelihoods based on the availability of information. One way they do this is by drawing on their own experiences. Another problem is that people often rely on whatever means of finding information is most convenient, even when it is much less effective. Finally, they tend to imagine or misconstrue facts when few are readily available.
Decision makers often try to estimate an answer by starting at an initial value and then making adjustments. Unfortunately, this process does not work well because the adjustments tend to be insufficient. This is often because the adjustments are based on extrapolated information, with subjects assuming that the same patterns will hold throughout. Another problem observed is that subjects made poor choices between simple events and conjunctive or disjunctive events. One finding was that when subjects had to choose between a simple event with a probability of 50% and a conjunctive event with a probability of 48%, they tended to select the less likely conjunctive event. Conversely, when subjects had to choose between a simple event with a probability of 50% and a disjunctive event with a probability of 52%, they tended to choose the simple event. This is consistent with the general finding that subjects overestimate the likelihood of conjunctive events and underestimate the likelihood of disjunctive events.
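The 48% and 52% figures can be reproduced with the marble setup usually credited to this line of research (assumed parameters here: probability 0.9 of success per draw for the conjunctive bet, 0.1 per draw for the disjunctive bet, seven draws each):

```python
# Assumed parameters for illustration: seven draws; the conjunctive bet
# requires success on every draw (p = 0.9 each), the disjunctive bet
# requires success on at least one draw (p = 0.1 each).
p_simple = 0.5                  # e.g., one draw from a 50/50 bag
p_conjunctive = 0.9 ** 7        # all 7 draws succeed: ~0.478
p_disjunctive = 1 - 0.9 ** 7    # at least 1 of 7 succeeds: ~0.522

# Subjects tended to prefer the conjunctive bet over the simple one,
# and the simple bet over the disjunctive one -- the reverse of the
# true ordering of the probabilities.
print(round(p_conjunctive, 3), p_simple, round(p_disjunctive, 3))
```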
Throughout the article, the authors discussed different biases that people exhibit when making decisions. They also discussed how these biases are common among educated people as well as laymen, although the mistakes made by professionals are usually not as elementary as those made by laymen. Educated decision makers tend to base decisions on the statistical data available to them. However, they often draw improper assumptions and inferences from that data, which leads to many of the same misconceptions and errors in judgment.
New Theories on How Decision Making Can be Improved
In their article “How Can Decision Making Be Improved”, Milkman, Chugh and Bazerman discuss some of the ways in which decision makers are biased and how they can attempt to overcome these biases in order to make decisions more optimally. They explain why effective decision making is important, and then discuss how it can be improved.
The authors begin by addressing the fact that errors carry many costs. They say that as society becomes more industrialized and dependent on knowledge, the costs of making poor decisions rise. Therefore, it is important to understand decision outcomes and how those outcomes can be improved, which means decision makers need to know more about their own strategies.
The authors say that academic research is expected to help improve decision making. Professionals in different fields are conducting research to better understand how decisions are made. By understanding how human beings make decisions, researchers can help establish how to make them better.
The authors discuss some of the approaches that have been proposed to protect against biased decision making. Earlier approaches stressed warning people about biases, helping them understand those biases, supplying feedback, and offering educational programs. Unfortunately, research suggests that these approaches are not very effective. Newer research has focused more on cognitive processes.
The article stresses that people often lack important information for making decisions, or the capacity to analyze the information the decision requires. One theory is that people use two different types of thinking: System 1 and System 2. System 1 thinking tends to be faster, more careless and more emotional. System 2 thinking, on the other hand, is slower, more logical and more careful. When people lack information or feel rushed to make a decision, they are more likely to resort to System 1 thinking.
The authors claim that it is possible to move from System 1 to System 2 thinking. One of their suggestions is for decision makers to use formal analytical processes in place of intuition. If available data shows a link between two variables, decision makers can create a model or formula to support a more thought-out decision. Empirical evidence has shown that using this kind of model results in better decisions.
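As a minimal sketch of replacing intuition with a formula, the snippet below fits a simple least-squares line to hypothetical historical data and then uses the fitted line, rather than gut feel, to score a new case. All names and numbers are illustrative, not from the article.

```python
# Fit y = a + b*x by ordinary least squares, then predict a new case.
def fit_line(xs, ys):
    """Return intercept a and slope b of the least-squares line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Hypothetical history: applicants' test scores vs. later job performance.
scores      = [55, 60, 70, 80, 90]
performance = [2.0, 2.4, 3.1, 3.6, 4.2]

a, b = fit_line(scores, performance)
predicted = a + b * 75   # model-based rating for a new applicant scoring 75
print(round(predicted, 2))
```

The point of the exercise is that once such a formula exists, the same inputs always yield the same judgment, which is exactly the consistency intuition lacks.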
Another way to improve System 2 thinking is to view the situation from the perspective of an outsider. This approach has helped decision makers reduce their overconfidence about how much they know about a problem, as well as their unrealistic estimates of how long a project will take or how likely they are to succeed. Decision makers can also be encouraged to play “devil’s advocate” with themselves to reduce biases such as overconfidence, hindsight bias and anchoring.
The authors also discuss a study conducted by Slovic and Fischhoff on combating the effects of hindsight bias. Slovic and Fischhoff believed that subjects experienced hindsight bias when they were unwilling to draw on their knowledge of past situations and apply it to a decision. Another group of researchers concluded that decision makers need to consider the contributions of the other people they work with in order to overcome this bias. Several groups have also produced research suggesting that decision making biases can be overcome through analogical thinking. Finally, biases can be reduced by considering many options simultaneously rather than evaluating each one separately.
Through most of the article the authors emphasize that System 2 thinking seems to lead to better decisions than System 1 thinking. At the end, however, they discuss how System 1 thinking can also be useful, describing how the unconscious mind can identify solutions that the conscious mind may overlook. A newer approach involves changing the environment in which System 1 thinking takes place in order to improve the decision making process. This strategy involves simulating the environment in which the decision will be made, so that the decision maker gains a better understanding of how they will actually decide. The process can also help decision makers be more honest about biases they do not like to admit to.
Sometimes Decision Making Requires Thinking in Reverse
In “Decision Making: Going Forward in Reverse”, Einhorn and Hogarth discuss how analyzing past information can be an important way of dealing with the future. Although managers constantly use both backward-looking and forward-looking analysis, they often do not understand the differences between the two. Since the two processes need to be handled differently, managers often make bad decisions when they do not understand how to handle each of them.
The authors claim that when thinking backwards, decision makers must begin by trying to find a cause and effect relationship. Most decision makers begin this investigation by trying to identify an unusual event or occurrence that may explain the current situation. They then analyze their theories and determine whether they adequately explain the situation or resolve the problem. However, a number of explanations may plausibly fit the situation, so it is difficult to identify which, if any, is appropriate. Decision makers therefore usually experiment to narrow the possibilities and speculate as to which explanation is most appropriate.
The authors discuss how decision makers often look for links between causes and effects by looking for similarities between them. They use the example of early medicine, in which physicians believed that jaundice could be cured by a yellow remedy. Obviously, some of the associations that emerge make no sense. Therefore, people weigh the strength of association between cause and effect using cues. The four categories of cues are: causes come before effects, causes and effects occur at approximately the same time, causes vary with effects, and causes may resemble effects. These cues can give decision makers a sense of which direction to investigate.
The authors suggest several approaches that decision makers can use to think backwards more effectively. One is to use a number of different metaphors, to guard against the flaws associated with any single metaphor. Decision makers should also use more than one cue, and should sometimes promote creative thinking by deliberately going against the cues. They should also consider how many links there are in the chain between a cause and an effect, keeping in mind that the more links there are, the weaker the chain may be. Finally, decision makers should consider alternative explanations. These theories should be tested experimentally when possible. When experimentation is not possible, decision makers can mentally simulate the situations and circumstances involved to get a better idea of what might happen and how the cause and effect might work.
The authors spend the second half of the article talking about how decision makers think forward. They discuss how most people have a hard time doing so accurately. Interestingly, people tend to have more faith in human judgment than in statistical models, despite the fact that statistical models tend to be much more accurate. Nonetheless, there are a number of reasons why humans tend to not have faith in statistical models.
The primary reason humans tend to be skeptical of models is that they cannot adequately understand all the variables and the relationships between them. The errors produced by models show up consistently, whereas the errors produced by humans vary. As a result, the inaccuracies of models are more visible and stand out more in people's minds. Humans often try to extrapolate their own patterns, which they believe can be more accurate than the models they would otherwise use.
Decision makers are also skeptical of models because they believe the models are static. To fight this bias, models need to be updated and improved as new information and relationships are learned. Another issue the authors raise is the importance of separating accurate predictions from the effects caused by those predictions. They illustrate this with the example of the president of the United States stating that the economy is heading into a recession. If a recession does follow, it is important to determine whether the president actually predicted the recession correctly, or whether his statement itself caused it.
The final reason humans tend to avoid using models is that models are often thought to cost more than the value they provide. The authors argue that even though the cost-benefit tradeoff of a model is difficult to measure, models will usually prove worth the cost if they are used enough. They illustrate this by showing how AT&T used models to reduce bad debt that was costing the company over $100 million a year.
This article was similar to “Competing on Analytics” (Davenport et al., 2005) in that both articles discussed how statistical models can be of enormous value to decision makers. Both stress the limitations of human judgment, and the need to consider models that may be capable of producing superior results. Similarly, “Automated Decision Making Comes of Age” (Davenport, 2005) discusses computer-operated decision making and the benefits it can yield beyond human judgment. All three articles stress that even though carefully constructed models can outperform human beings, they are still not widely accepted and are met with skepticism by decision makers in the real world.
Making Important Decisions Under Ambiguity
In their article “Robust Decision-making Under Ambiguity”, Erat and Kavadias discuss how decision makers face ambiguity and deal with it. They state that ambiguity differs from risk in that ambiguity does not come with probabilities estimated from knowledge and previous experience. Risk is difficult enough to understand and deal with on its own, but when risk cannot be quantified, managers' jobs become even harder. The authors discuss portfolio theory, and how experts often cannot even agree on the returns of investments, much less the probabilities of a given return being realized. Even with past data available, it is impossible to derive an accurate and reliable distribution; at best, analysts can only develop a confidence interval.
The authors begin by citing work done by Knight, who claimed that decision making takes place in three different types of environments. In the known environment, decision makers understand the state of the world to a meaningful degree. In the uncertain environment, the decision maker does not fully understand the state of the world, but does understand the probabilities they face. Finally, in the ambiguous environment, the decision maker does not know the likelihood of any state.
In the first case, maximizing the outcome is most straightforward. In the second case, the decision maker must try to maximize the expected outcome using the probabilities established from previous experiences and events. This is done by summing each possible outcome weighted by its probability. The third case is much harder to assess. In this case, the decision maker should maximize the worst case scenario, so that the outcome is guaranteed to be at least a certain minimum value. The authors show that the robust formulation of the ambiguous environment can be expressed in a way that makes it equivalent to the uncertain environment, so that an ambiguous result can be indistinguishable from an uncertain one.
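The contrast between the second and third cases can be sketched as two decision rules applied to a made-up payoff table (expected-value maximization when probabilities are known, maximin when they are not):

```python
# Payoffs for two hypothetical actions in (good market, bad market).
# All numbers are invented for illustration.
payoffs = {
    "launch":  [120, -40],
    "license": [60, 20],
}

# Uncertain environment: probabilities are known, so maximize expected value.
probs = [0.6, 0.4]
expected = {a: sum(p * v for p, v in zip(probs, vals))
            for a, vals in payoffs.items()}
best_ev = max(expected, key=expected.get)      # launch: 0.6*120 + 0.4*(-40) = 56

# Ambiguous environment: no probabilities, so maximize the worst case (maximin).
worst = {a: min(vals) for a, vals in payoffs.items()}
best_maximin = max(worst, key=worst.get)       # license: worst case 20 vs -40

print(best_ev, best_maximin)
```

Note how the two rules recommend different actions on the same payoffs: the maximin rule gives up upside in exchange for a guaranteed floor, which is exactly the trade the authors advocate under ambiguity.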
The authors discuss a real world situation in which ambiguity plays a key role. Customers desire a certain minimum level of performance and are willing to pay up to a maximum price. The authors say that for market-needs applications such as this, customer preferences are usually assumed to follow a normal distribution with some standard deviation. The cost of developing the product also increases with performance. Therefore, in order to maximize the number of customers who will purchase the product, the firm must pay higher development costs. Unfortunately, firms do not know the exact characteristics of their target customers. Since the firm must maximize profits by matching performance with customer expectations, it is difficult to know how to optimize the return. The firm's best bet is to develop a profit function based on performance and make a good estimate of the customers' performance requirements.
They illustrate this situation with a sophisticated equation:
Π(P, µ) = ∫ (1/√(2πσ²)) e^(−(θ−µ)²/(2σ²)) M(θ) dθ − C(P)
The limits shown on the integral on pg. 6 go from the minimum required performance specifications (T) to the number of customers in the market.
This equation basically says that profit is a function of the price the customer segment is willing to pay minus the cost of producing the product, so summing the contributions across the entire market gives the overall profit from the product.
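As a rough numerical illustration (not from the article), the profit expression can be evaluated with simple trapezoidal integration. Here the margin M(θ) is assumed constant, the cost C(P) quadratic, and the integral is taken from the minimum specification T up to the delivered performance P, which is one plausible reading of the limits; all parameter values are invented.

```python
import math

# Assumed, illustrative parameters: customer requirements θ ~ Normal(µ, σ²),
# constant margin m per customer served, quadratic development cost c*P².
mu, sigma = 5.0, 1.0
m, c = 10.0, 0.1

def normal_pdf(theta):
    """Density of Normal(mu, sigma**2) at theta."""
    return (math.exp(-(theta - mu) ** 2 / (2 * sigma ** 2))
            / math.sqrt(2 * math.pi * sigma ** 2))

def profit(P, T=3.0, steps=2000):
    """Trapezoidal approximation of the profit integral from T to P, minus C(P)."""
    if P <= T:
        return -c * P ** 2
    h = (P - T) / steps
    total = 0.5 * (normal_pdf(T) + normal_pdf(P))
    total += sum(normal_pdf(T + i * h) for i in range(1, steps))
    return m * total * h - c * P ** 2

# Higher performance P captures more of the demand distribution but costs more.
print(round(profit(5.0), 3), round(profit(7.0), 3))
```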
The authors conclude this example by reinforcing the concepts they had already addressed. As in any other situation, the managers of the firm should address their risks by maximizing their worst case scenarios. In this situation, there is a different worst case scenario for every action taken, so managers would need to first identify their action and then identify the worst case scenario that follows from it.
The authors did a good job explaining the differences between uncertain and ambiguous environments. The message of the article seems to be that in an ambiguous situation decision makers cannot really maximize their expected outcomes, because it is impossible to predict or even guess what those outcomes will be. The authors therefore advocate maximizing the worst case scenario in order to limit losses. Unfortunately, they did not provide many concrete examples to illustrate these concepts, which were presented in a mostly theoretical way. The model they introduced was also quite sophisticated and confusing. However, the basic concepts were clear and understandable.