BUSINESS DECISION MODELS (İŞLETME KARAR MODELLERİ) - Chapter 3 Summary

Chapter 3: Decision Making Under Risk
Introduction
Decision making, simply described as choosing among multiple alternatives, is a future-oriented activity. Decision theory uses specific knowledge and techniques to reach the best decision under the unknowns of the future. Very few things in life can be regarded as certain; uncertainty is a fact of life. The degree of certainty has two extreme points: complete certainty and complete uncertainty. The region between these two points corresponds to decision making under risk.
The concepts of risk and uncertainty differ in meaning. Risk can be understood as a potential loss, so it carries a negative connotation. Uncertainty, on the other hand, which implies the absence of certainty about the outcome in a particular situation, carries neither a positive nor a negative meaning. Uncertainty becomes risk when future conditions can be defined and their probabilities can be calculated. In other words, risk is a measurable uncertainty.
Decision Making Under Risk
In decision making under risk, there is more than one state of nature, and the decision maker has sufficient information to assign a probability to the occurrence of each of these states. The probabilities can be obtained from past records or from the subjective judgment of the decision maker. In other words, decision situations in which the chance of occurrence of each state of nature is known or can be estimated are defined as decisions made under risk. Such decision-making problems are also known as probabilistic or stochastic decision problems.
The most important advantage of knowing the probabilities of the states of nature is that the decision maker can calculate how much risk is attached to the gain or cost expected from each decision alternative. For example, an investor who uses past experience to estimate the probability of stagnation in the economy can use this probability when planning an investment.
In the decision-making process under risk, a probability must be assigned to each state of nature in the strategy table, and the sum of these probabilities must equal 1. After the probabilities are added to the strategy table, the decision maker determines which approach to use to select the best course of action. These approaches are the Expected Value (EV) (or Expected Monetary Value (EMV)), the Expected Opportunity Loss (EOL), and the Maximum Probability criteria.
Expected Value (EV) (or Expected Monetary Value (EMV)) Criterion
The expected value (EV) is the anticipated value of a given investment at some point in the future. In decision making under risk, the expected value criterion is commonly used when comparing alternatives, on the basis of maximizing expected profit or minimizing expected loss. In other words, the expected value criterion seeks the alternative whose expected profit is maximized or whose expected cost is minimized, where the profit or cost arising from each alternative is weighted by the relevant probabilities. When the decision's consequences involve only money, we calculate the Expected Monetary Value (EMV). The expected value is simply the sum of all possible outcomes, each multiplied by its associated probability. For the calculation, each possible outcome is multiplied by the probability of the associated state of nature; these products are then summed across each row to obtain the expected gain or loss for the corresponding decision alternative.
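The row-by-row calculation above can be sketched in a few lines of Python. The payoff table, the alternative names, and the probabilities below are all invented for illustration; only the EMV arithmetic itself is from the text.

```python
# Hypothetical strategy table: rows = decision alternatives, columns = states of nature.
# The probabilities of the states of nature sum to 1, as required.
probs = [0.5, 0.3, 0.2]                      # P(state 1), P(state 2), P(state 3)
payoffs = {
    "Bonds":   [40, 45, 5],                  # payoff of each alternative under each state
    "Stocks":  [70, 30, -13],
    "Deposit": [53, 45, -5],
}

def emv(row, probs):
    """EMV of one alternative: sum of (payoff x state probability) across the row."""
    return sum(x * p for x, p in zip(row, probs))

emvs = {alt: emv(row, probs) for alt, row in payoffs.items()}
best = max(emvs, key=emvs.get)               # alternative with the maximum expected profit
```

With these made-up figures, the EMVs are 34.5, 41.4, and 39.0, so "Stocks" would be chosen; for a cost table one would instead take the minimum.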
Expected Opportunity Loss (EOL) Criterion
In decision making, instead of the expected monetary value (EMV), the expected opportunity loss (EOL) criterion, also known as expected regret, can be used. The expected opportunity loss shows the average additional amount the investor would have gained by making the right decision instead of a wrong one. The basic idea of this criterion is that people try to minimize their regret, or opportunity loss. In other words, an opportunity loss (or regret) is the loss incurred by failing to select the best available alternative.
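A minimal sketch of the EOL computation, reusing the same hypothetical payoff table as above: for each state of nature, the regret of an alternative is the best payoff achievable under that state minus the alternative's own payoff, and the EOL is the probability-weighted sum of these regrets.

```python
probs = [0.5, 0.3, 0.2]                      # hypothetical state probabilities
payoffs = {
    "Bonds":   [40, 45, 5],                  # invented payoff table for illustration
    "Stocks":  [70, 30, -13],
    "Deposit": [53, 45, -5],
}

# Best payoff achievable under each state of nature (the column maxima).
col_best = [max(row[j] for row in payoffs.values()) for j in range(len(probs))]

# Regret = what was given up by not choosing that state's best alternative.
regret = {alt: [col_best[j] - row[j] for j in range(len(probs))]
          for alt, row in payoffs.items()}

# Expected opportunity loss = probability-weighted sum of regrets per row.
eol = {alt: sum(r * p for r, p in zip(regs, probs)) for alt, regs in regret.items()}
best = min(eol, key=eol.get)                 # minimize expected regret
```

Note that EOL picks the same alternative as EMV on this table ("Stocks", EOL 8.1), which is no accident: minimizing expected regret and maximizing expected payoff always agree.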
Maximum Probability Criterion
In this criterion, when facing the various possible states of nature in a decision under risk, the decision maker chooses the alternative that is best for the most likely state of nature, rather than carrying out calculations over all states of nature. In other words, the decision maker ignores every possible event except the one most likely to occur, and selects the best possible result (maximum gain or minimum loss) under that state. While this criterion has the advantage of simplicity, ignoring substantive information about the less likely states of nature makes it a weaker decision-making criterion: a state of nature with a low probability may still occur; it is simply less likely to occur than the others.
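The maximum probability criterion reduces the table to a single column, as this short sketch (same invented table as before) shows:

```python
probs = [0.5, 0.3, 0.2]                      # hypothetical state probabilities
payoffs = {
    "Bonds":   [40, 45, 5],                  # invented payoff table for illustration
    "Stocks":  [70, 30, -13],
    "Deposit": [53, 45, -5],
}

# Keep only the most likely state of nature and ignore all the others.
most_likely = probs.index(max(probs))

# Pick the alternative with the best payoff in that single column.
best = max(payoffs, key=lambda alt: payoffs[alt][most_likely])
```

Here state 1 (probability 0.5) is the most likely, so "Stocks" is chosen on the strength of its payoff of 70 alone; the -13 it yields under state 3 is simply never looked at, which illustrates the weakness discussed above.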
Expected Value of Perfect Information (EVPI)
In decision theory, the expected value of perfect information (EVPI) is the maximum price you should be willing to pay for additional information about a decision problem. Since there is always a possibility that the decision turns out to be wrong, some degree of uncertainty always remains. EVPI analysis measures the expected cost of that uncertainty, because perfect information would eliminate the possibility of making the wrong decision. In other words, the expected value of perfect information places an upper limit on what you should pay for information that will help you make a better decision.
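EVPI is computed as the expected value with perfect information (choosing the best alternative for whichever state occurs) minus the best EMV obtainable without it. A sketch on the same hypothetical table:

```python
probs = [0.5, 0.3, 0.2]                      # hypothetical state probabilities
payoffs = {
    "Bonds":   [40, 45, 5],                  # invented payoff table for illustration
    "Stocks":  [70, 30, -13],
    "Deposit": [53, 45, -5],
}

# With perfect information we would always pick each state's best payoff.
ev_with_pi = sum(max(row[j] for row in payoffs.values()) * probs[j]
                 for j in range(len(probs)))

# Without it, the best we can do is the maximum EMV.
best_emv = max(sum(x * p for x, p in zip(row, probs)) for row in payoffs.values())

evpi = ev_with_pi - best_emv                 # upper limit on the price of information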
Decision Tree
In some cases, whether the result is deterministic or probabilistic, a course of action is selected from a limited number of alternatives and the data are presented as a decision table. In real life, decision problems are generally more complex, and the large number of variables and probabilities to be determined makes them more complicated. For decision problems with more detail, adopting a graphical or visual approach facilitates the decision maker's work. The traditional graphical technique used in solving decision problems is the decision tree. A decision tree is basically a graphical exposition of a decision table. Decision trees help the decision maker develop a clear view of the structure of a problem and make it easier to determine the possible scenarios that can result if a particular course of action is chosen.
A decision tree is composed of the following components: branches, decision nodes, chance nodes, and payoffs.
- Branch: The line connecting the nodes of a decision tree is called a branch. A branch is a single strategy that connects either two nodes or a node and an outcome. A decision tree is generally drawn from left to right; accordingly, the line to the right of a decision node is called a decision branch, while the line to the right of a chance node is called a chance branch. In decision-making problems, decision branches represent the alternatives (strategies) and chance branches represent the states of nature (events). Each chance branch is labelled with a probability representing the decision maker's estimate of the chance that the branch will be followed.
- Decision node: A decision node, represented by a square on the decision tree, is a point from which two or more branches emerge. Each branch from a decision node represents a possible alternative that can be chosen by the decision maker. Since the decision tree usually begins with the first decision, the decision node positioned at the far left of the tree is also the starting node, also referred to as the root node. At a decision node, the decision maker chooses among at least two alternatives.
- Chance node: A chance node, represented by a circle on the decision tree, indicates that one of a finite number of states of nature is expected to occur at that point in the process. The states of nature are shown as branches to the right of the chance node, and the assumed probabilities of the states of nature are written above the branches.
- Payoff: In decision analysis, a payoff is the consequence resulting from a specific combination of a decision alternative and a state of nature. Payoffs can be expressed in terms of profit, cost, time, distance, or any other measure appropriate to the decision problem being analyzed.
Decision trees are drawn in a horizontal flow from left to right. The starting node is usually a decision node. Once the starting decision node is placed in the tree, all possible alternatives related to it are added as branches extending to the right (decision branches). Then a chance node or another decision node, corresponding to the events or decisions expected to occur after the initial decision, is added, and the drawing continues until the payoffs are reached. Following a path from start to end, the resulting gain or loss is written at the end of the branch. Thus, the decision tree displays all the components of the problem in a single graph.
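The node types above can be encoded and evaluated programmatically. The sketch below represents a tiny hypothetical tree (one decision node, one chance node, invented payoffs and probabilities) as nested dictionaries and folds it back from the payoffs on the right to the root on the left: expected value at chance nodes, best branch at decision nodes.

```python
# Hypothetical one-stage tree: a leaf is a payoff, a dict is a decision or chance node.
tree = {
    "type": "decision",
    "branches": {
        "Invest": {
            "type": "chance",
            "branches": [          # (probability, subtree or payoff)
                (0.6, 100),        # favourable state of nature
                (0.4, -30),        # unfavourable state of nature
            ],
        },
        "Do nothing": 0,
    },
}

def rollback(node):
    """Fold the tree from right to left: payoffs at leaves, expected value
    at chance nodes, the best branch value at decision nodes."""
    if not isinstance(node, dict):
        return node                                            # leaf payoff
    if node["type"] == "chance":
        return sum(p * rollback(sub) for p, sub in node["branches"])
    return max(rollback(sub) for sub in node["branches"].values())

value = rollback(tree)
```

Rolling back this tree gives 0.6*100 + 0.4*(-30) = 48 at the chance node, which beats the 0 of "Do nothing", so the root takes the value 48.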
Bayes’ Theorem
Bayes’ theorem is used as a normative tool that tells us how we should revise our probability assessments when new information becomes available. In other words, Bayes’ theorem is basically a process of revising the known probabilities of an event in the light of new knowledge.
When Bayes’ theorem is used to solve a decision problem, first the prior probability distribution of the parameters to be estimated is determined from the subjective and objective information available. Then the distribution of the parameters after the new information (the posterior distribution) is determined from the additional information obtained from the sample, and the course of action that gives the maximum profit or minimum cost under this distribution is selected. In this context, in a multistage decision tree, all probability branches at the right of the tree are conditional on the outcomes that have occurred earlier, to their left. With the formula used in Bayes’ theorem, while the result of a given event is known, the probabilities of the possible causes of that result are investigated. From this perspective, cause and result change places in Bayes’ formula.
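The prior-to-posterior revision can be sketched numerically. The states ("boom", "recession"), their prior probabilities, and the likelihoods of a hypothetical piece of new information (say, a pessimistic market forecast) are all invented for illustration; only the Bayes formula itself is from the text.

```python
# Prior probabilities of the states of nature (assumed for illustration).
prior = {"boom": 0.7, "recession": 0.3}

# Likelihood of observing the new information given each state,
# e.g. P(pessimistic forecast | state); also assumed figures.
likelihood = {"boom": 0.2, "recession": 0.8}

# Bayes' theorem: P(state | info) = P(info | state) * P(state) / P(info).
evidence = sum(likelihood[s] * prior[s] for s in prior)        # P(info)
posterior = {s: likelihood[s] * prior[s] / evidence for s in prior}
```

Here the pessimistic forecast raises P(recession) from the prior 0.3 to the posterior 0.24/0.38, roughly 0.63, and these revised probabilities would then replace the priors on the chance branches of the decision tree.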