## RISK ANALYSIS TECHNIQUES FOR TRADERS: The Empirical Techniques


**TO SUMMARIZE THUS FAR**

You have seen that a good system is the one with the highest geometric mean. Yet to find the geometric mean you must know f. You may find this confusing. Here now is a summary and clarification of the process:

Take the trade listing of a given market system.

1. Find the optimal f, either by testing various f values from 0 to 1 or through iteration. The optimal f is that which yields the highest TWR.

2. Once you have found f, you can take the Nth root of the TWR that corresponds to your f, where N is the total number of trades. This is your geometric mean for this market system. You can now use this geometric mean to make apples-to-apples comparisons with other market systems, as well as use the f to know how many contracts to trade for that particular market system.
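The two-step procedure above can be sketched in a few lines of Python. This is a minimal illustration, not code from the text; the trade listing is the nine-outcome sequence used for the examples in this section, and the brute-force search simply tests f in increments of .01:

```python
def twr(trades, f):
    """Terminal wealth relative at fixed fraction f: the product of per-trade HPRs,
    where each HPR = 1 + f * (trade / abs(biggest loss))."""
    biggest_loss = abs(min(trades))
    result = 1.0
    for t in trades:
        result *= 1.0 + f * (t / biggest_loss)
    return result

def optimal_f(trades, steps=100):
    """Step 1: test f values from .01 to 1.00 and keep the one with the highest TWR."""
    best_f, best_twr = 0.0, 0.0
    for i in range(1, steps + 1):
        f = i / steps
        w = twr(trades, f)
        if w > best_twr:
            best_f, best_twr = f, w
    return best_f, best_twr

# Hypothetical trade listing (P&L per contract) for one market system
trades = [9, 18, 7, 1, 10, -5, -3, -17, -7]
f, w = optimal_f(trades)
geo_mean = w ** (1.0 / len(trades))   # Step 2: the Nth root of the TWR
```

For this listing the search lands on f = .24, matching the value discussed later in this section.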

Once the optimal f is found, it can readily be turned into a dollar amount by dividing the biggest loss by the negative optimal f. For example, if our biggest loss is $100 and our optimal f is .25, then -$100/-.25 = $400. In other words, we should bet 1 unit for every $400 we have in our stake. If you're having trouble with some of these concepts, try thinking in terms of betting in units, not dollars. The number of dollars you allocate to each unit is calculated by dividing your largest loss by the negative optimal f. The optimal f is a result of the balance between a system's profit-making ability and its risk. Most people think that the optimal fixed fraction is that percentage of your total stake to bet. This is absolutely false.

There is an interim step involved. Optimal f is not in itself the percentage of your total stake to bet, it is the divisor of your biggest loss. The quotient of this division is what you divide your total stake by to know how many bets to make or contracts to have on. You will also notice that margin has nothing whatsoever to do with what is the mathematically optimal number of contracts to have on. Margin doesn't matter because the sizes of individual profits and losses are not the product of the amount of money put up as margin. Rather, the profits and losses are the product of the exposure of 1 unit. The amount put up as margin is further made meaningless in a money-management sense, because the size of the loss is not limited to the margin.
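The interim step can be sketched as a division and a floor. The $100 biggest loss and f = .25 come from the example above; the $10,000 stake is a hypothetical figure:

```python
import math

def dollars_per_unit(biggest_loss, f):
    """Optimal f divides the biggest loss; the quotient is the dollars of stake per unit."""
    return biggest_loss / -f                 # biggest_loss is negative, e.g. -100.0

def units_to_trade(stake, biggest_loss, f):
    """How many bets/contracts to have on: total stake divided by dollars per unit, floored."""
    return math.floor(stake / dollars_per_unit(biggest_loss, f))

per_unit = dollars_per_unit(-100.0, 0.25)    # -$100 / -.25 = $400 per unit
n = units_to_trade(10_000.0, -100.0, 0.25)   # 25 units on a hypothetical $10,000 stake
```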

Most people incorrectly believe that f is a straight-line function rising up and to the right. They believe this because they think it would mean that the more you are willing to risk the more you stand to make. People reason this way because they think that a positive mathematical expectancy is just the mirror image of a negative expectancy. They mistakenly believe that if increasing your total action in a negative expectancy game results in losing faster, then increasing your total action in a positive expectancy game will result in winning faster. This is not true. At some point in a positive expectancy situation, further increasing your total action works against you. That point is a function of both the system's profitability and its consistency (i.e., its geometric mean), since you are reinvesting the returns back into the system.

It is a mathematical fact that when two people face the same sequence of favorable betting or trading opportunities, if one uses the optimal f and the other uses any different money-management system, then the ratio of the optimal f bettor's stake to the other person's stake will increase as time goes on, with higher and higher probability. In the long run, the optimal f bettor will have infinitely greater wealth than any other money-management system bettor with a probability approaching 1. Furthermore, if a bettor has the goal of reaching a specified fortune and is facing a series of favorable betting or trading opportunities, the expected time to reach the fortune will be lower (faster) with optimal f than with any other betting system.

Let's go back and reconsider the following sequence of bets (trades):

+9, +18, +7, +1, +10, -5, -3, -17, -7

Recall that we determined that the Kelly formula was not applicable to this sequence, because the wins were not all for the same amount and neither were the losses. We also decided to average the wins and average the losses and take these averages as our values into the Kelly formula (as many traders mistakenly do). Doing this we arrived at an f value of .16. It was stated that this is an incorrect application of Kelly, that it would not yield the optimal f. The Kelly formula must be specific to a single bet. You cannot average your wins and losses from trading and obtain the true optimal f using the Kelly formula.

Our highest TWR on this sequence of bets (trades) is obtained at .24, or betting $1 for every $71 in our stake. That is the optimal geometric growth you can squeeze out of this sequence of bets (trades) trading fixed fraction. Let's look at the TWRs at different points along 100 loops through this sequence of bets. At 1 loop through (9 bets or trades), the TWR for f = .16 is 1.085, and for f = .24 it is 1.096. This means that for 1 pass through this sequence of bets an f = .16 made 99% of what an f = .24 would have made. To continue:

As can be seen, using an f value that we mistakenly figured from Kelly only made 37.5% as much as did our optimal f of .24 after 900 bets or trades. In other words, our optimal f of .24, which is only .08 different from .16 (50% beyond the suboptimal value), made almost 267% the profit that f = .16 did after 900 bets! Let's go another 11 cycles through this sequence of trades, so that we now have a total of 999 trades. Now our TWR for f = .16 is 8563.302 (not even what it was for f = .24 at 900 trades) and our TWR for f = .24 is 25,451.045. At 999 trades f = .16 is only 33.6% of f = .24, or f = .24 is 297% of f = .16!
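The divergence described above can be reproduced by looping the sequence. A rough sketch in Python (the exact TWRs depend on floating-point rounding, but the relationship holds):

```python
def twr(trades, f, passes=1):
    """TWR after repeating the whole trade sequence `passes` times at fixed fraction f."""
    biggest_loss = abs(min(trades))
    result = 1.0
    for _ in range(passes):
        for t in trades:
            result *= 1.0 + f * (t / biggest_loss)
    return result

seq = [9, 18, 7, 1, 10, -5, -3, -17, -7]      # the sequence from the text

one_pass_16 = twr(seq, 0.16)                  # ~1.085
one_pass_24 = twr(seq, 0.24)                  # ~1.096
long_16 = twr(seq, 0.16, passes=111)          # 999 trades, ~8,563
long_24 = twr(seq, 0.24, passes=111)          # 999 trades, ~25,451
```

After one pass the two f values are nearly indistinguishable; after 111 passes the optimal f has roughly tripled the suboptimal result.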

As you see, using the optimal f does not appear to offer much advantage over the short run, but over the long run it becomes more and more important. The point is, you must give the program time when trading at the optimal f and not expect miracles in the short run. The more time (i.e., bets or trades) that elapses, the greater the difference between using the optimal f and any other money-management strategy.

**GEOMETRIC AVERAGE TRADE**

At this point the trader may be interested in figuring his or her geometric average trade, that is, the average garnered per contract per trade assuming profits are always reinvested and fractional contracts can be purchased. This is the mathematical expectation when you are trading on a fixed fractional basis. This figure shows you the effect of losers occurring when you have many contracts on and winners occurring when you have fewer contracts on. In effect, this approximates how a system would have fared per contract per trade doing fixed fraction.

(Actually the geometric average trade is your mathematical expectation in dollars per contract per trade. The geometric mean minus 1 is your mathematical expectation per trade: a geometric mean of 1.025 represents a mathematical expectation of 2.5% per trade, irrespective of size.) Many traders look only at the average trade of a market system to see if it is high enough to justify trading the system. However, they should be looking at the geometric average trade (GAT) in making their decision.

(1.14) GAT = G*(Biggest Loss/-f)

where,

G = Geometric mean - 1.

f = Optimal fixed fraction.

For example, suppose a system has a geometric mean of 1.017238, the biggest loss is $8,000, and the optimal f is .31. Our geometric average trade would be:

GAT = (1.017238-1)*(-$8,000/-.31)

= .017238*$25,806.45

= $444.85
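Equation (1.14) translates directly into code. A one-function sketch of the example above:

```python
def geometric_average_trade(geo_mean, biggest_loss, f):
    """Equation (1.14): GAT = G * (biggest loss / -f), where G = geometric mean - 1."""
    return (geo_mean - 1.0) * (biggest_loss / -f)

gat = geometric_average_trade(1.017238, -8000.0, 0.31)   # ~$444.85 per contract per trade
```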

**WHY YOU MUST KNOW YOUR OPTIMAL F**

Let's increase the winning payout from 2 units to 5 units. Here your optimal f is .4, or to bet $1 for every $2.50 in your stake. After 20 sequences of +5, -1 (40 bets), your stake has grown to 127,482 times its starting value, thanks to optimal f. Now look what happens in this extremely favorable situation if you miss the optimal f by 20%. At f values of .6 and .2 you don't make a tenth as much as you do at .4. This particular situation, a 50/50 bet paying 5 to 1, has a mathematical expectation of (5*.5)+(-1*.5) = 2 units per bet, yet if you bet using an f value greater than .8 you lose money.
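A sketch of this 5-to-1 game, sweeping f across its range to locate the peak; all numbers follow the example in the text:

```python
def twr_5to1(f, pairs=20):
    """TWR after `pairs` repetitions of +5, -1 at fixed fraction f (biggest loss = 1 unit)."""
    per_pair = (1.0 + 5.0 * f) * (1.0 - f)    # HPR of the win times HPR of the loss
    return per_pair ** pairs

best = max(range(1, 100), key=lambda i: twr_5to1(i / 100)) / 100
growth = twr_5to1(0.40)    # ~127,482: the multiple on starting stake after 40 bets
losing = twr_5to1(0.90)    # past f = .8 the per-pair HPR drops below 1.0
```

The peak falls at f = .4, and every f greater than .8 produces a TWR below 1.0, i.e. a losing strategy in a game with positive expectation.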

*Figure 20 sequences of +5, -1.*

Two points must be illuminated here. The first is that whenever we discuss a TWR, we assume that in arriving at that TWR we allowed fractional contracts along the way. In other words, the TWR assumes that you are able to trade 5.4789 contracts if that is called for at some point. It is because the TWR calculation allows for fractional contracts that the TWR will always be the same for a given set of trade outcomes regardless of their sequence. You may argue that in real life this is not the case. In real life you cannot trade fractional contracts. Your argument is correct. However, I am allowing the TWR to be calculated this way because in so doing we represent the average TWR for all possible starting stakes. If you require that all bets be for integer amounts, then the amount of the starting stake becomes important.

However, if you were to average the TWRs from all possible starting stake values using integer bets only, you would arrive at the same TWR value that we calculate by allowing the fractional bet. Therefore, the TWR value as calculated is more realistic than if we were to constrain it to integer bets only, in that it is representative of the universe of outcomes of different starting stakes. Furthermore, the greater the equity in the account, the more trading on an integer contract basis will be the same as trading on a fractional contract basis. The limit here is an account with an infinite amount of capital where the integer bet and fractional bet are for the same amounts exactly. This is interesting in that generally the closer you can stick to optimal f, the better. That is to say that the greater the capitalization of an account, the greater will be the effect of optimal f.

Since optimal f will make an account grow at the fastest possible rate, we can state that optimal f will make itself work better and better for you at the fastest possible rate. The graphs bear out a few more interesting points. The first is that at no other fixed fraction will you make more money than you will at optimal f. In other words, it does not pay to bet $1 for every $2 in your stake in the earlier example of a 5:1 game. In such a case you would make more money if you bet $1 for every $2.50 in your stake. It does not pay to risk more than the optimal f; in fact, you pay a price to do so! Obviously, the greater the capitalization of an account, the more accurately you can stick to optimal f, as the dollars per single contract required are a smaller percentage of the total equity. For example, suppose optimal f for a given market system dictates you trade 1 contract for every $5,000 in an account.

If an account starts out with $10,000 in equity, it will need to gain (or lose) 50% before a quantity adjustment is necessary. Contrast this to a $500,000 account, where there would be a contract adjustment for every 1% change in equity. Clearly the larger account can better take advantage of the benefits provided by optimal f than can the smaller account. Theoretically, optimal f assumes you can trade in infinitely divisible quantities, which is not the case in real life, where the smallest quantity you can trade in is a single contract. In the asymptotic sense this does not matter. But in the real-life integer-bet scenario, a good case could be presented for trading a market system that requires as small a percentage of the account equity as possible, especially for smaller accounts. But there is a tradeoff here as well.

Since we are striving to trade in markets that would require us to trade in greater multiples than other markets, we will be paying greater commissions, execution costs, and slippage. Bear in mind that the amount required per contract in real life is the greater of the initial margin requirement and the dollar amount per contract dictated by the optimal f. The finer you can cut it (i.e., the more frequently you can adjust the size of the positions you are trading so as to align yourself with what the optimal f dictates), the better off you are. Most accounts would therefore be better off trading the smaller markets. Corn may not seem like a very exciting market to you compared to the S&P's. Yet for most people the corn market can get awfully exciting if they have a few hundred contracts on. Those who trade stocks or forwards (such as forex traders) have a tremendous advantage here.

Since you must calculate your optimal f based on the outcomes (the P&Ls) on a 1-contract (1 unit) basis, you must first decide what 1 unit is in stocks or forex. As a stock trader, say you decide that 1 unit will be 100 shares. You will use the P&L stream generated by trading 100 shares on each and every trade to determine your optimal f. When you go to trade this particular stock (and let's say your system calls for trading 2.39 contracts or units), you will be able to trade the fractional part (the .39 part) by putting on 239 shares. Thus, by being able to trade the fractional part of 1 unit, you are able to take more advantage of optimal f. Likewise for forex traders, who must first decide what 1 contract or unit is. For the forex trader, 1 unit may be one million U.S. dollars or one million Swiss francs.

**THE SEVERITY OF DRAWDOWN**

It is important to note at this point that the drawdown you can expect with fixed fractional trading, as a percentage retracement of your account equity, historically would have been at least as much as f percent. In other words if f is .55, then your drawdown would have been at least 55% of your equity. This is so because if you are trading at the optimal f, as soon as your biggest loss was hit, you would experience the drawdown equivalent to f. Again, assuming that f for a system is .55 and assuming that translates into trading 1 contract for every $10,000, this means that your biggest loss was $5,500. As should by now be obvious, when the biggest loss was encountered, you would have lost $5,500 for each contract you had on, and would have had 1 contract on for every $10,000 in the account. At that point, your drawdown is 55% of equity.

Moreover, the drawdown might continue: The next trade or series of trades might draw your account down even more. Therefore, the better a system, the higher the f. The higher the f, generally the higher the drawdown, since the drawdown can never be any less than the f as a percentage. There is a paradox involved here in that if a system is good enough to generate an optimal f that is a high percentage, then the drawdown for such a good system will also be quite high. Whereas optimal f allows you to experience the greatest geometric growth, it also gives you enough rope to hang yourself with. Most traders harbor great illusions about the severity of drawdowns. Further, most people have fallacious ideas regarding the ratio of potential gains to dispersion of those gains. We know that if we are using the optimal f when we are fixed fractional trading, we can expect substantial drawdowns in terms of percentage equity retracements.

Optimal f is like plutonium. It gives you a tremendous amount of power, yet it is dreadfully dangerous. These substantial drawdowns are truly a problem, particularly for novices, in that trading at the optimal f level gives them the chance to experience a cataclysmic loss sooner than they ordinarily might have. Diversification can greatly buffer the drawdowns, but the reader is warned not to expect it to eliminate drawdown. In fact, the real benefit of diversification is that it lets you get off many more trials, many more plays, in the same time period, thus increasing your total profit. Diversification, although usually the best means by which to buffer drawdowns, does not necessarily reduce them, and in some instances may actually increase them! Many people have the mistaken impression that drawdown can be completely eliminated if they diversify effectively enough.

To an extent this is true, in that drawdowns can be buffered through effective diversification, but they can never be completely eliminated. Do not be deluded. No matter how good the systems employed are, no matter how effectively you diversify, you will still encounter substantial drawdowns. The reason is that no matter how uncorrelated your market systems are, there comes a period when most or all of the market systems in your portfolio zig in unison against you when they should be zagging. You will have enormous difficulty finding a portfolio with at least 5 years of historical data, with all market systems traded at the optimal f, that has had less than a 30% drawdown in terms of equity retracement! This is regardless of how many market systems you employ. If you want to be in this and do it mathematically correctly, you had better expect to be nailed for 30% to 95% equity retracements.

This takes enormous discipline, and very few people can emotionally handle this. When you dilute f, although you reduce the drawdowns arithmetically, you also reduce the returns geometrically. Why commit funds to futures trading that aren't necessary simply to flatten out the equity curve at the expense of your bottom-line profits? You can diversify cheaply somewhere else. Any time a trader deviates from always trading the same constant contract size, he or she encounters the problem of what quantities to trade in. This is so whether the trader recognizes this problem or not. Constant contract trading is not the solution, as you can never experience geometric growth trading constant contract. So, like it or not, the question of what quantity to take on the next trade is inevitable for everyone. To simply select an arbitrary quantity is a costly mistake. Optimal f is factual; it is mathematically correct.

**MODERN PORTFOLIO THEORY**

Recall the paradox of the optimal f and a market system's drawdown. The better a market system is, the higher the value for f. Yet the drawdown (historically) if you are trading the optimal f can never be lower than f. Generally speaking, then, the better the market system is, the greater the drawdown will be as a percentage of account equity if you are trading optimal f. That is, if you want to have the greatest geometric growth in an account, then you can count on severe drawdowns along the way. Effective diversification among other market systems is the most effective way in which this drawdown can be buffered and conquered while still staying close to the peak of the f curve (i.e., without having to trim back to, say, f/2).

When one market system goes into a drawdown, another one that is being traded in the account will come on strong, thus canceling the drawdown of the other. This also provides a catalytic effect on the entire account. The market system that just experienced the drawdown (and is now getting back to performing well) will have no less funds to start with than it did when the drawdown began (thanks to the other market system canceling out the drawdown). Diversification won't hinder the upside of a system (quite the reverse: the upside is far greater, since after a drawdown you aren't starting back with fewer contracts), yet it will buffer the downside (but only to a very limited extent).

There exists a quantifiable, optimal portfolio mix given a group of market systems and their respective optimal fs. Although we cannot be certain that the optimal portfolio mix in the past will be optimal in the future, such is more likely than that the optimal system parameters of the past will be optimal or near optimal in the future. Whereas optimal system parameters change quite quickly from one time period to another, optimal portfolio mixes change very slowly (as do optimal f values). Generally, the correlations between market systems tend to remain constant. This is good news to a trader who has found the optimal portfolio mix, the optimal diversification among market systems.

**THE MARKOWITZ MODEL**

The basic concepts of modern portfolio theory emanate from a monograph written by Dr. Harry Markowitz. Essentially, Markowitz proposed that portfolio management is one of composition, not individual stock selection as is more commonly practiced. Markowitz argued that diversification is effective only to the extent that the correlation coefficient between the markets involved is negative. If we have a portfolio composed of one stock, our best diversification is obtained if we choose another stock such that the correlation between the two stock prices is as low as possible. The net result would be that the portfolio, as a whole (composed of these two stocks with negative correlation), would have less variation in price than either one of the stocks alone.

Markowitz proposed that investors act in a rational manner and, given the choice, would opt for a portfolio with the same return as the one they have but with less risk, or opt for a portfolio with a higher return than the one they have but with the same risk. Further, for a given level of risk there is an optimal portfolio with the highest yield, and likewise for a given yield there is an optimal portfolio with the lowest risk. An investor with a portfolio whose yield could be increased with no resultant increase in risk, or whose risk could be lowered with no resultant decrease in yield, is said to hold an inefficient portfolio. If you hold portfolio C, you would be better off with portfolio A, where you would have the same return with less risk, or portfolio B, where you would have more return with the same risk.

*Figure Modern portfolio theory.*

In describing this, Markowitz described what is called the efficient frontier. This is the set of portfolios that lie on the upper and left sides of the graph. These are portfolios whose yield can no longer be increased without increasing the risk and whose risk cannot be lowered without lowering the yield. Portfolios lying on the efficient frontier are said to be efficient portfolios.

*Figure The efficient frontier*

Those portfolios lying high and off to the right and low and to the left are generally not very well diversified among very many issues. Those portfolios lying in the middle of the efficient frontier are usually very well diversified. Which portfolio a particular investor chooses is a function of the investor's risk aversion, his or her willingness to assume risk. In the Markowitz model any portfolio that lies upon the efficient frontier is said to be a good portfolio choice, but where on the efficient frontier is a matter of personal preference. The Markowitz model was originally introduced as applying to a portfolio of stocks that the investor would hold long. Therefore, the basic inputs were the expected returns on the stocks (defined as the expected appreciation in share price plus any dividends), the expected variation in those returns, and the correlations of the different returns among the different stocks.

If we were to transport this concept to futures, it would stand to reason (since futures don't pay any dividends) that we measure the expected price gains, variances, and correlations of the different futures. The question arises, "If we are measuring the correlation of prices, what if we have two systems on the same market that are negatively correlated?" In other words, suppose we have systems A and B. There is a perfect negative correlation between the two. When A is in a drawdown, B is in a drawup, and vice versa. Isn't this really an ideal diversification? What we really want to measure, then, is not the correlations of prices of the markets we're using. Rather, we want to measure the correlations of daily equity changes between the different market systems. Yet this is still an apples-and-oranges comparison.

Say that two of the market systems whose correlations we are examining are both trading the same market, yet one system has an optimal f corresponding to 1 contract per every $2,000 in account equity and the other an optimal f corresponding to 1 contract per every $10,000 in account equity. To overcome this and incorporate the optimal fs of the various market systems under consideration, as well as to account for fixed fractional trading, we convert the daily equity changes for a given market system into daily HPRs. The HPR in this context is how much a particular market system made or lost for a given day on a 1-contract basis, relative to what the optimal f for that system is. Say the market system with an optimal f of $2,000 made $100 on a given day. The HPR for that market system for that day is then 1.05. To find the daily HPR:

(1.15) Daily HPR = (A/B)+1

where,

A = Dollars made or lost that day.

B = Optimal f in dollars.

We begin by converting the daily dollar gains and losses for the market systems we are looking at into daily HPRs relative to the optimal f in dollars for a given market system. In so doing, we make quantity irrelevant. In the example just cited, where your daily HPR is 1.05, you made 5% that day on that money. This is 5% regardless of whether you had on 1 contract or 1,000 contracts. Now you are ready to begin comparing different portfolios. The trick here is to compare every possible portfolio combination, from portfolios of 1 market system (for every market system under consideration) to portfolios of N market systems.
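Equation (1.15) can be sketched in code, using the $2,000 optimal f and $100 gain from the example:

```python
def daily_hpr(pnl_per_contract, f_dollars):
    """Equation (1.15): Daily HPR = (A / B) + 1, independent of how many contracts were on."""
    return pnl_per_contract / f_dollars + 1.0

hpr = daily_hpr(100.0, 2000.0)   # 1.05: a 5% day relative to the optimal f in dollars
```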

As an example, suppose you are looking at market systems A, B, and C. Every combination would be:

A

B

C

AB

AC

BC

ABC

But you do not stop there. For each combination you must figure each percentage allocation as well. To do so you will need to have a minimum percentage increment. The following example, continued from the portfolio A, B, C example, illustrates this with a minimum portfolio allocation of 10% (.10):

A 100%

B 100%

C 100%

AB 90% 10%

80% 20%

70% 30%

60% 40%

50% 50%

40% 60%

30% 70%

20% 80%

10% 90%

AC 90% 10%

80% 20%

70% 30%

60% 40%

50% 50%

40% 60%

30% 70%

20% 80%

10% 90%

BC 90% 10%

80% 20%

70% 30%

60% 40%

50% 50%

40% 60%

30% 70%

20% 80%

10% 90%

ABC 80% 10% 10%

70% 20% 10%

70% 10% 20%

...

10% 30% 60%

10% 20% 70%

10% 10% 80%
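A sketch of how such a listing could be generated, enumerating every combination of market systems and every percentage allocation in 10% increments (the helper names are illustrative, not from the text):

```python
from itertools import combinations

def allocations(n, total=100, step=10):
    """All ways to split `total`% across n slots in `step`% increments (every slot > 0)."""
    if n == 1:
        yield (total,)
        return
    # leave at least `step` percent for each of the remaining n-1 slots
    for first in range(step, total - step * (n - 1) + 1, step):
        for rest in allocations(n - 1, total - first, step):
            yield (first,) + rest

def cpas(systems, step=10):
    """Every combination of market systems paired with every percentage allocation."""
    result = []
    for size in range(1, len(systems) + 1):
        for combo in combinations(systems, size):
            for alloc in allocations(size, 100, step):
                result.append((combo, alloc))
    return result

table = cpas(["A", "B", "C"])   # 3 singles + 27 pairs + 36 triples = 66 CPAs
```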

Now for each CPA (combination and percentage allocation) we go through each day and compute a net HPR for that day. The net HPR for a given day is the sum of each market system's HPR for that day times its percentage allocation. For example, suppose for systems A, B, and C we are looking at percentage allocations of 10%, 50%, and 40% respectively. Further, suppose that the individual HPRs for those market systems for that day are .9, 1.4, and 1.05 respectively. Then the net HPR for this day is:

Net HPR = (.9*.1)+(1.4*.5)+(1.05*.4)

= .09+.7+.42

= 1.21
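The net HPR calculation is just a weighted sum. A sketch of the example:

```python
def net_hpr(hprs, weights):
    """Net HPR for one day: each market system's HPR times its percentage allocation, summed."""
    return sum(h * w for h, w in zip(hprs, weights))

day = net_hpr([0.9, 1.4, 1.05], [0.10, 0.50, 0.40])   # 1.21, as in the example
```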

We must now perform two necessary tabulations. The first is that of the average daily net HPR for each CPA. This comprises the reward or Y axis of the Markowitz model. The second necessary tabulation is that of the standard deviation of the daily net HPRs for a given CPA, specifically, the population standard deviation. This measure corresponds to the risk or X axis of the Markowitz model.

Modern portfolio theory is often called E-V Theory, corresponding to the other names given the two axes. The vertical axis is often called E, for expected return, and the horizontal axis V, for variance in expected returns. From these first two tabulations we can find our efficient frontier. We have effectively incorporated various markets, systems, and f factors, and we can now see quantitatively what our best CPAs are (i.e., which CPAs lie along the efficient frontier).

**THE GEOMETRIC MEAN PORTFOLIO STRATEGY**

Which particular point on the efficient frontier you decide to be on (i.e., which particular efficient CPA) is a function of your own risk-aversion preference, at least according to the Markowitz model. However, there is an optimal point to be at on the efficient frontier, and finding this point is mathematically solvable. If you choose that CPA which shows the highest geometric mean of the HPRs, you will arrive at the optimal CPA!

We can estimate the geometric mean from the arithmetic mean HPR and the population standard deviation of the HPRs (both of which are calculations we already have, as they are the two axes of the Markowitz model!). Equations (1.16a) and (1.16b) give us the formula for the estimated geometric mean (EGM). This estimate is very close to the actual geometric mean, and it is acceptable to use the estimated geometric mean and the actual geometric mean interchangeably.

(1.16a) EGM = (AHPR^2-SD^2)^(1/2)

or

(1.16b) EGM = (AHPR^2-V)^(1/2)

where,

EGM = The estimated geometric mean.

AHPR = The arithmetic average HPR, or the return coordinate of the portfolio.

SD = The standard deviation in HPRs, or the risk coordinate of the portfolio.

V = The variance in HPRs, equal to SD^2.

Both forms of Equation (1.16) are equivalent. The CPA with the highest geometric mean is the CPA that will maximize the growth of the portfolio value over the long run; furthermore, it will minimize the time required to reach a specified level of equity.
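Equation (1.16a) in code; the AHPR and SD coordinates below are hypothetical:

```python
import math

def estimated_geometric_mean(ahpr, sd):
    """Equation (1.16a): EGM = (AHPR^2 - SD^2)^(1/2)."""
    return math.sqrt(ahpr * ahpr - sd * sd)

# Hypothetical coordinates for one CPA: 1.0121 average daily net HPR, .09 standard deviation
egm = estimated_geometric_mean(1.0121, 0.09)   # slightly below the arithmetic mean
```

Note that any dispersion (SD > 0) pulls the estimated geometric mean below the arithmetic mean, which is exactly the drag on compounded growth discussed throughout this section.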

**DAILY PROCEDURES FOR USING OPTIMAL PORTFOLIOS**

At this point, there may be some question as to how you implement this portfolio approach on a day-to-day basis. Again an example will be used to illustrate. Suppose your optimal CPA calls for you to be in three different market systems. In this case, suppose the percentage allocations are 10%, 50%, and 40%. If you were looking at a $50,000 account, your account would be "subdivided" into three accounts of $5,000, $25,000, and $20,000 for each market system (A, B, and C) respectively. For each market system's subaccount balance you then figure how many contracts you could trade. Say the f factors dictated the following:

Market system A, 1 contract per $5,000 in account equity.

Market system B, 1 contract per $2,500 in account equity.

Market system C, 1 contract per $2,000 in account equity.

You would then be trading 1 contract for market system A ($5,000/$5,000), 10 contracts for market system B ($25,000/$2,500), and 10 contracts for market system C ($20,000/$2,000). Each day, as the total equity in the account changes, all subaccounts are recapitalized. What is meant here is, suppose this $50,000 account dropped to $45,000 the next day. Since we recapitalize the subaccounts each day, we then have $4,500 for market system subaccount A, $22,500 for market system subaccount B, and $18,000 for market system subaccount C, from which we would trade zero contracts the next day on market system A ($4,500/$5,000 = .9, which floored to the integer is 0), 9 contracts for market system B ($22,500/$2,500), and 9 contracts for market system C ($18,000/$2,000).

You always recapitalize the subaccounts each day regardless of whether there was a profit or a loss. Do not be confused. Subaccount, as used here, is a mental construct. Another way of doing this that will give us the same answers and that is perhaps easier to understand is to divide a market system's optimal f amount by its percentage allocation. This gives us a dollar amount that we then divide the entire account equity by to know how many contracts to trade. Since the account equity changes daily, we recapitalize this daily to the new total account equity. In the example we have cited, market system A, at an f value of 1 contract per $5,000 in account equity and a percentage allocation of 10%, yields 1 contract per $50,000 in total account equity ($5,000/.10). Market system B, at an f value of 1 contract per $2,500 in account equity and a percentage allocation of 50%, yields 1 contract per $5,000 in total account equity ($2,500/.50).

Market system C, at an f value of 1 contract per $2,000 in account equity and a percentage allocation of 40%, yields 1 contract per $5,000 in total account equity ($2,000/.40). Thus, if we had $50,000 in total account equity, we would trade 1 contract for market system A, 10 contracts for market system B, and 10 contracts for market system C. Tomorrow we would do the same thing. Say our total account equity got up to $59,000. In this case, dividing $59,000 by $50,000 yields 1.18, which floored to the integer is 1, so we would trade 1 contract for market system A tomorrow. For market system B, we would trade 11 contracts ($59,000/$5,000 = 11.8, which floored to the integer = 11). For market system C we would also trade 11 contracts, since market system C also trades 1 contract for every $5,000 in total account equity.
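The arithmetic above can be sketched in a few lines of Python. This is a minimal illustration using the chapter's figures; the function names are mine, not part of any trading package, and both methods are shown to confirm they give the same answers:

```python
from math import floor

allocations = {"A": 0.10, "B": 0.50, "C": 0.40}   # optimal percentage allocations
f_dollars   = {"A": 5000, "B": 2500, "C": 2000}   # optimal f in dollars per contract

def contracts_via_subaccounts(total_equity):
    # Recapitalize each subaccount daily, then divide by its f amount.
    return {m: floor(total_equity * allocations[m] / f_dollars[m]) for m in allocations}

def contracts_via_divisor(total_equity):
    # Equivalent method: divide each f amount by its allocation first,
    # then divide that dollar figure into the total account equity.
    return {m: floor(total_equity / (f_dollars[m] / allocations[m])) for m in allocations}

print(contracts_via_subaccounts(50000))  # {'A': 1, 'B': 10, 'C': 10}
print(contracts_via_divisor(45000))      # {'A': 0, 'B': 9, 'C': 9}
print(contracts_via_divisor(59000))      # {'A': 1, 'B': 11, 'C': 11}
```

Note the floor to the integer in both methods, matching the rule in the text.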

Suppose we have a trade on from market system C yesterday and we are long 10 contracts. We do not need to go in and add another today to bring us up to 11 contracts. Rather the amounts we are calculating using the equity as of the most recent close mark-to-market is for new positions only. So for tomorrow, since we have 10 contracts on, if we get stopped out of this trade (or exit it on a profit target), we will be going 11 contracts on a new trade if one should occur. Determining our optimal portfolio using the daily HPRs means that we should go in and alter our positions on a day-by-day rather than a trade-by-trade basis, but this really isn't necessary unless you are trading a longer-term system, and then it may not be beneficial to adjust your position size on a day-by-day basis due to increased transaction costs. In a pure sense, you should adjust your positions on a day-by-day basis.

In real life, you are usually almost as well off to alter them on a trade-by-trade basis, with little loss of accuracy. This matter of implementing the correct daily positions is not such a problem. Recall that in finding the optimal portfolio we used the daily HPRs as input. We should therefore adjust our position size daily (ideally at the price each position closed at yesterday). In real life this becomes impractical, however, as the transaction costs begin to outweigh the benefits of adjusting our positions daily. We are usually better off adjusting only at the end of each trade. The fact that the portfolio is temporarily out of balance after day 1 of a trade is a lesser price to pay than the cost of adjusting the portfolio daily.

On the other hand, if we take a position that we are going to hold for a year, we may want to adjust such a position daily rather than adjust it more than a year from now when we take another trade. Generally, though, on longer-term systems such as this we are better off adjusting the position each week, say, rather than each day. The reasoning here again is that the loss in efficiency by having the portfolio temporarily out of balance is less of a price to pay than the added transaction costs of a daily adjustment. You have to sit down and determine which is the lesser penalty for you to pay, based upon your trading strategy (i.e., how long you are typically in a trade) as well as the transaction costs involved.

How long a time period should you look at when calculating the optimal portfolios? As with the question, "How long a time period should you look at to determine the optimal f for a given market system?" there is no definitive answer here. Generally, the more back data you use, the better your results should be (i.e., the more closely the near-optimal portfolios of the future will resemble what your study concluded were the near-optimal portfolios). However, correlations do change, albeit slowly. One of the problems with using too long a time period is that there will be a tendency toward what were yesterday's hot markets. For instance, if you ran this program in 1983 over 5 years of back data, you would most likely have one of the precious metals show very clearly as being a part of the optimal portfolio.

However, the precious metals did very poorly for most trading systems for quite a few years after the 1980-1981 markets. So you see there is a tradeoff between using too much past history and too little in the determination of the optimal portfolio of the future. Finally, the question arises as to how often you should rerun this entire procedure of finding the optimal portfolio. Ideally you should run this on a continuous basis. However, rarely will the portfolio composition change. Realistically you should probably run this about every 3 months. Even by running this program every 3 months there is still a high likelihood that you will arrive at the same optimal portfolio composition, or one very similar to it, that you arrived at before.

**ALLOCATIONS GREATER THAN 100%**

Thus far, we have been restricting the sum of the percentage allocations to 100%. It is quite possible that the sum of the percentage allocations for the portfolio that would result in the greatest geometric growth would exceed 100%. Consider, for instance, two market systems, A and B, that are identical in every respect, except that there is a negative correlation (R<0) between them. Assume that the optimal f, in dollars, for each of these market systems is $5,000. Suppose the optimal portfolio proves to be that portfolio that allocates 50% to each of the two market systems. This would mean that you should trade 1 contract for every $10,000 in equity for market system A and likewise for B. When there is negative correlation, however, it can be shown that the optimal account growth is actually obtained by trading 1 contract for an amount less than $10,000 in equity for market system A and/or market system B.

In other words, when there is negative correlation, you can have the sum of percentage allocations exceed 100%. Further, it is possible, although not too likely, that the individual percentage allocations to the market systems may exceed 100% individually. It is interesting to consider what happens when the correlation between two market systems approaches -1.00. When such an event occurs, the amount to finance trades by for the market systems tends to become infinitesimal. This is so because the portfolio, the net result of the market systems, tends to never suffer a losing day (since an amount lost by a market system on a given day is offset by the same amount being won by a different market system in the portfolio that day). Therefore, with diversification it is possible to have the optimal portfolio allocate a smaller f factor in dollars to a given market system than trading that market system alone would.

To accommodate this, you can divide the optimal f in dollars for each market system by the number of market systems you are running. In our example, rather than inputting $5,000 as the optimal f for market system A, we would input $2,500 (dividing $5,000, the optimal f, by 2, the number of market systems we are going to run), and likewise for market system B. Now when we use this procedure to determine the optimal geomean portfolio as being the one that allocates 50% to A and 50% to B, it means that we should trade 1 contract for every $5,000 in equity for market system A ($2,500/.5) and likewise for B. You must also make sure to use cash as another market system. This is non-interest-bearing cash, and it has an HPR of 1.00 for every day.

Suppose in our previous example that the optimal growth is obtained at 50% in market system A and 40% in market system B. In other words, we would trade 1 contract for every $5,000 in equity for market system A and 1 contract for every $6,250 in equity for market system B ($2,500/.4). If we were using cash as another market system, this would be a possible combination (showing the optimal portfolio as having the remaining 10% in cash). If we were not using cash as another market system, this combination wouldn't be possible. If the answer obtained by using this procedure does not include non-interest-bearing cash as one of the output components, then you must raise the factor by which you divide the optimal fs in dollars used as input. Returning to our example, suppose we used non-interest-bearing cash with the two market systems A and B.

Further suppose that our resultant optimal portfolio did not include at least some percentage allocation to non-interest-bearing cash. Instead, suppose that the optimal portfolio turned out to be 60% in market system A and 40% in market system B (or any other combination of percentage allocations for the two market systems that sums to 100%) and 0% allocated to non-interest-bearing cash. This would mean that even though we divided our optimal fs in dollars by 2, that was not enough. We must instead divide them by a number higher than 2. So we will go back and divide our optimal fs in dollars by 3 or 4 until we get an optimal portfolio that includes a certain percentage allocation to non-interest-bearing cash. This will be the optimal portfolio.

Of course, in real life this does not mean that we must actually allocate any of our trading capital to non-interest-bearing cash. Rather, the non-interest-bearing cash was used to derive the optimal amount of funds to allocate for 1 contract to each market system, when viewed in light of each market system's relationship to each other market system. Be aware that the percentage allocations of the portfolio that would have resulted in the greatest geometric growth in the past can be in excess of 100%, and usually are. This is accommodated in this technique by dividing the optimal f in dollars for each market system by a specific integer (usually the number of market systems) and including non-interest-bearing cash (i.e., a market system with an HPR of 1.00 every day) as another market system.

The correlations of the different market systems can have a profound effect on a portfolio. It is important that you realize that a portfolio can be greater than the sum of its parts. It is also possible that a portfolio may be less than the sum of its parts. Consider again a coin-toss game, a game where you win $2 on heads and lose $1 on tails. Such a game has a mathematical expectation (arithmetic) of fifty cents. The optimal f is .25, or bet $1 for every $4 in your stake, which results in a geometric mean of 1.0607. Now consider a second game, one where the amount you can win on a coin toss is $.90 and the amount you can lose is $1.10. Such a game has a negative mathematical expectation of -$.10; thus, there is no optimal f, and therefore no geometric mean either. Consider what happens when we play both games simultaneously.

If the second game had a correlation coefficient of 1.0 to the first (that is, if the two coins always came up the same, either both heads or both tails), then the two possible net outcomes would be that we win $2.90 on heads or lose $2.10 on tails. Such a game would then have a mathematical expectation of $.40, an optimal f of .14, and a geometric mean of 1.013. Obviously, this is an inferior approach to just trading the positive mathematical expectation game. Now assume that the games are negatively correlated. That is, when the coin on the game with the positive mathematical expectation comes up heads, we lose the $1.10 of the negative expectation game, and vice versa. Thus, the net of the two games is a win of $.90 if the coins come up heads and a loss of $.10 if the coins come up tails. The mathematical expectation is still $.40, yet the optimal f is .44, which yields a geometric mean of 1.67.

Recall that the geometric mean is the growth factor on your stake on average per play. This means that on average in this game we would expect to make more than 10 times as much per play as in the outright positive mathematical expectation game. Yet this result is obtained by taking that positive mathematical expectation game and combining it with a negative expectation game. The reason for the dramatic difference in results is due to the negative correlation between the two market systems. Here is an example where the portfolio is greater than the sum of its parts. Yet it is also important to bear in mind that your drawdown, historically, would have been at least as high as f percent in terms of percentage of equity retraced. In real life, you should expect that in the future it will be higher than this.

This means that the combination of the two market systems, even though they are negatively correlated, would have resulted in at least a 44% equity retracement. This is higher than with the outright positive mathematical expectation game, which had an optimal f of .25 and therefore a minimum historical drawdown of 25% equity retracement. The moral is clear. Diversification, if done properly, is a technique that increases returns. It does not necessarily reduce worst-case drawdowns. This is absolutely contrary to the popular notion. Diversification will buffer many of the little pullbacks from equity highs, but it does not reduce worst-case drawdowns. Further, as we have seen with optimal f, drawdowns are far greater than most people imagine. Therefore, even if you are very well diversified, you must still expect substantial equity retracements.

However, let's go back and look at the results if the correlation coefficient between the two games were 0. In such a game, whatever the result of one toss was would have no bearing on the result of the other toss. Thus, there are four equally likely outcomes:

Win on both games: $2.00+$.90 = $2.90

Win on the first game, lose on the second: $2.00-$1.10 = $.90

Lose on the first game, win on the second: -$1.00+$.90 = -$.10

Lose on both games: -$1.00-$1.10 = -$2.10

The mathematical expectation is thus:

ME = 2.9*.25+.9*.25-.1*.25-2.1*.25 = .725+.225-.025-.525 = .4

Once again, the mathematical expectation is $.40. The optimal f on this sequence is .26, or 1 bet for every $8.08 in account equity (since the biggest loss here is -$2.10). Thus, the least the historical drawdown may have been was 26% (about the same as with the outright positive expectation game). However, here is an example where there is buffering of the equity retracements. If we were simply playing the outright positive expectation game, the third sequence would have hit us for the maximum drawdown. Since we are combining the two systems, the third sequence is buffered. But that is the only benefit. The resultant geometric mean is 1.025, less than half the rate of growth of playing just the outright positive expectation game. We placed 4 bets in the same time as we would have placed 2 bets in the outright positive expectation game, but as you can see, still didn't make as much money:

1.0607^2 = 1.12508449

1.025^4 = 1.103812891
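The coin-toss figures for all three correlation cases (f = .14 and G = 1.013 at +1, f = .44 and G = 1.67 at -1, f = .26 and G = 1.025 at 0) can be verified by a brute-force search over f. This is a sketch; the grid step and function names are arbitrary choices, and outcomes are assumed equally likely:

```python
def geo_mean(outcomes, f):
    """Geometric mean HPR at fraction f; outcomes are equally likely P&Ls."""
    worst = abs(min(outcomes))          # biggest loss, used as the f divisor
    g = 1.0
    for x in outcomes:
        g *= (1.0 + f * x / worst) ** (1.0 / len(outcomes))
    return g

def optimal_f(outcomes, step=0.001):
    """Grid-search the f in (0, 1) with the highest geometric mean."""
    fs = [i * step for i in range(1, int(1 / step))]
    return max(fs, key=lambda f: geo_mean(outcomes, f))

pos_corr  = [2.90, -2.10]                # rho = +1: both games win or lose together
neg_corr  = [0.90, -0.10]                # rho = -1: one game offsets the other
zero_corr = [2.90, 0.90, -0.10, -2.10]   # rho =  0: four equally likely outcomes

for name, game in [("+1", pos_corr), ("-1", neg_corr), ("0", zero_corr)]:
    f = optimal_f(game)
    print(name, round(f, 2), round(geo_mean(game, f), 4))
```

The search reproduces the text's values to the precision quoted.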

Clearly, when you diversify you must use market systems that have as low a correlation in returns to each other as possible and preferably a negative one. You must realize that your worst-case equity retracement will hardly be helped out by the diversification, although you may be able to buffer many of the other lesser equity retracements. The most important thing to realize about diversification is that its greatest benefit is in what it can do to improve your geometric mean. The technique for finding the optimal portfolio by looking at the net daily HPRs eliminates having to look at how many trades each market system accomplished in determining optimal portfolios.

Using the technique allows you to look at the geometric mean alone, without regard to the frequency of trading. Thus, the geometric mean becomes the single statistic of how beneficial a portfolio is. There is no benefit to be obtained by diversifying into more market systems than that which results in the highest geometric mean. This may mean no diversification at all if a portfolio of one market system results in the highest geometric mean. It may also mean combining market systems that you would never want to trade by themselves.

**HOW THE DISPERSION OF OUTCOMES AFFECTS GEOMETRIC GROWTH**

Once we acknowledge the fact that, whether we want to or not, whether consciously or not, we determine our quantities to trade as a function of the level of equity in an account, we can look at HPRs instead of dollar amounts for trades. In so doing, we can give money management specificity and exactitude. We can examine our money-management strategies, derive rules, and draw conclusions. One of the big conclusions, one that will no doubt spawn many others for us, regards the relationship of geometric growth and the dispersion of outcomes (HPRs).

This discussion will use a gambling illustration for the sake of simplicity. Consider two systems, System A, which wins 10% of the time and has a 28 to 1 win/loss ratio, and System B, which wins 70% of the time and has a 1 to 1 win/loss ratio. Our mathematical expectation, per unit bet, for A is 1.9 and for B is .4. We can therefore say that for every unit bet System A will return, on average, 4.75 times as much as System B. But let's examine this under fixed fractional trading. We can find our optimal fs here by dividing the mathematical expectations by the win/loss ratios. This gives us an optimal f of .0678 for A and .4 for B.

The geometric means for each system at their optimal f levels are then:

A = 1.044176755

B = 1.0857629

As you can see, System B, although less than one quarter the mathematical expectation of A, makes almost twice as much per bet (returning 8.57629% of your entire stake per bet on average when you reinvest at the optimal f levels) as does A (which returns 4.4176755% of your entire stake per bet on average when you reinvest at the optimal f levels). Now, since a 50% drawdown on equity requires a 100% gain to recoup, note that 1.044177 to the power of X equals 2.0 at approximately X = 16, so it takes more than 16 trades for System A to recoup from a 50% drawdown.

Contrast this to System B, where 1.0857629 to the power of X equals 2.0 at approximately X = 8.4, so it takes 9 trades for System B to recoup from a 50% drawdown. What's going on here? Is this because System B has a higher percentage of winning trades? The reason B is outperforming A has to do with the dispersion of outcomes and its effect on the growth function. Most people have the mistaken impression that the growth function, the TWR, is:
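The figures for Systems A and B can be checked with a short sketch. It assumes the two-outcome formulas above (optimal f as mathematical expectation divided by the win/loss ratio, losses normalized to 1 unit); `opt_f` and `geo_mean` are illustrative names:

```python
from math import log, ceil

def geo_mean(p_win, win, loss, f):
    """Geometric mean HPR for a two-outcome bet at fraction f (loss is positive)."""
    return (1 + f * win / loss) ** p_win * (1 - f) ** (1 - p_win)

def opt_f(p_win, win, loss):
    """Optimal f = mathematical expectation / win-loss ratio."""
    me = p_win * win - (1 - p_win) * loss
    return me / (win / loss)

fa = opt_f(0.10, 28, 1)           # ~ .0679 for System A
fb = opt_f(0.70, 1, 1)            # = .4 for System B
ga = geo_mean(0.10, 28, 1, fa)    # ~ 1.0442
gb = geo_mean(0.70, 1, 1, fb)     # ~ 1.0858

# Whole trades needed to double the stake (recoup a 50% drawdown)
print(ceil(log(2) / log(ga)), ceil(log(2) / log(gb)))  # 17 9
```

Despite A's much larger expectation per unit bet, B doubles the stake in roughly half as many trades.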

(1.17) TWR = (1+R)^N

where,

R = The interest rate per period (e.g., 7% = .07).

N = The number of periods.

Since 1+R is the same thing as an HPR, we can say that most people have the mistaken impression that the growth function, the TWR, is:

(1.18) TWR = HPR^N

This function is only true when the return (i.e., the HPR) is constant, which is not the case in trading. The real growth function in trading (or any event where the HPR is not constant) is the multiplicative product of the HPRs. Assume we are trading coffee, our optimal f is 1 contract for every $21,000 in equity, and we have 2 trades, a loss of $210 and a gain of $210, for HPRs of .99 and 1.01 respectively. In this example our TWR would be:

TWR = 1.01*.99 = .9999

An insight can be gained by using the estimated geometric mean (EGM) given by Equation (1.16a):

(1.16a) EGM = (AHPR^2-SD^2)^(1/2)

or

(1.16b) EGM = (AHPR^2-V)^(1/2)

Now we take Equation (1.16a) or (1.16b) to the power of N to estimate the TWR. This will very closely approximate the "multiplicative" growth function, the actual TWR:

(1.19a) Estimated TWR = ((AHPR^2-SD^2)^(1/2))^N

or

(1.19b) Estimated TWR = ((AHPR^2-V)^(1/2))^N

where,

N = The number of periods.

AHPR = The arithmetic mean HPR.

SD = The population standard deviation in HPRs.

V = The population variance in HPRs.

The two equations are equivalent. The insight gained is that we can see here, mathematically, the tradeoff between an increase in the arithmetic average trade (the HPR) and the variance in the HPRs, and hence the reason that the 70% 1:1 system did better than the 10% 28:1 system!
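How close the estimate runs to the actual multiplicative TWR can be seen with the coffee example above. This is a minimal sketch of Equation (1.19a), using the population standard deviation as the text specifies:

```python
from math import sqrt

def estimated_twr(hprs):
    """Equation (1.19a): ((AHPR^2 - SD^2)^(1/2))^N, with population SD."""
    n = len(hprs)
    a = sum(hprs) / n                          # AHPR, the arithmetic mean HPR
    v = sum((h - a) ** 2 for h in hprs) / n    # population variance of the HPRs
    return sqrt(a * a - v) ** n

# The coffee example: HPRs of .99 and 1.01
hprs = [0.99, 1.01]
actual = 1.0
for h in hprs:
    actual *= h                                # the true multiplicative TWR

print(round(actual, 6), round(estimated_twr(hprs), 6))  # both ~ .9999
```

With only two HPRs the estimate is exact; with longer streams it remains a very close approximation.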

Our goal should be to maximize the coefficient of this function, to maximize:

(1.16b) EGM = (AHPR^2-V)^(1/2)

Expressed literally, our goal is "To maximize the square root of the quantity HPR squared minus the population variance in HPRs." The exponent of the estimated TWR, N, will take care of itself. That is to say that increasing N is not a problem, as we can increase the number of markets we are following, can trade more short-term types of systems, and so on. However, these statistical measures of dispersion, variance, and standard deviation (V and SD respectively), are difficult for most non-statisticians to envision. What many people therefore use in lieu of these measures is known as the mean absolute deviation (which we'll call M). Essentially, to find M you simply take the average absolute value of the difference of each data point to an average of the data points.

(1.20) M = ∑ABS(Xi-X̄)/N

In a bell-shaped distribution (as is almost always the case with the distribution of P&L's from a trading system) the mean absolute deviation equals about .8 of the standard deviation (in a Normal Distribution, it is .7979). Therefore, we can say:

(1.21) M = .8*SD

and

(1.22) SD = 1.25*M
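The .7979 figure can be checked empirically on normally distributed data. A sketch; the sample size and seed are arbitrary choices:

```python
import random
from math import sqrt

# Draw a large normal sample and compare the mean absolute deviation (M)
# to the population standard deviation (SD).
random.seed(1)
xs = [random.gauss(0.0, 1.0) for _ in range(200_000)]

mean = sum(xs) / len(xs)
sd = sqrt(sum((x - mean) ** 2 for x in xs) / len(xs))   # population SD
m = sum(abs(x - mean) for x in xs) / len(xs)            # mean absolute deviation

print(round(m / sd, 3))   # ~ 0.798, i.e., M is about .8 of SD
```

The exact ratio for the Normal Distribution is (2/pi)^(1/2), about .7979, which the sample ratio approaches as the sample grows.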

We will denote the arithmetic average HPR with the variable A, and the geometric average HPR with the variable G. Using Equation (1.16b), we can express the estimated geometric mean as:

(1.16b) G = (A^2-V)^(1/2)

From this equation, we can obtain:

(1.23) G^2 = (A^2-V)

Now substituting the standard deviation squared for the variance:

(1.24) G^2 = A^2-SD^2

From this equation we can isolate each variable, as well as isolate zero, to obtain the fundamental relationships between the arithmetic mean, geometric mean, and dispersion, expressed as SD^2 here:

(1.25) A^2-G^2-SD^2 = 0

(1.26) G^2 = A^2-SD^2

(1.27) SD^2 = A^2-G^2

(1.28) A^2 = G^2+SD^2

In these equations, the value SD^2 can also be written as V or as (1.25*M)^2.

This brings us to the point now where we can envision exactly what the relationships are. Notice that the last of these equations is the familiar Pythagorean Theorem: The hypotenuse of a right angle triangle squared equals the sum of the squares of its sides! But here the hypotenuse is A, and we want to maximize one of the legs, G.

In maximizing G, any increase in D (the dispersion leg, equal to SD or V^(1/2) or 1.25*M) will require an increase in A to offset. When D equals zero, A equals G, thus conforming to the misconstrued growth function TWR = (1+R)^N, per Equation (1.26).

So, in terms of their relative effect on G, we can state that an increase in A^2 is equal to a decrease of the same amount in (1.25*M)^2.

(1.29) ∆A^2 = -∆((1.25*M)^2)

To see this, consider what happens when A goes from 1.1 to 1.2:

When A = 1.1, we are given an SD of .1. When A = 1.2, to get an equivalent G, SD must equal .4899 per Equation (1.27). Since M = .8*SD, M = .3919. If we take the differences of the squares (1.2^2-1.1^2 and .4899^2-.1^2), both are equal to .23.
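This example can be worked directly from Equations (1.27) and (1.28). A quick check:

```python
from math import sqrt

# Equation (1.28): A^2 = G^2 + SD^2. Start at A = 1.1, SD = .1,
# then find the SD that keeps G unchanged when A rises to 1.2.
a1, sd1 = 1.1, 0.1
g = sqrt(a1 ** 2 - sd1 ** 2)        # the geometric mean to hold constant
sd2 = sqrt(1.2 ** 2 - g ** 2)       # Equation (1.27): SD^2 = A^2 - G^2
m2 = 0.8 * sd2                      # corresponding mean absolute deviation

print(round(sd2, 4), round(m2, 4))  # 0.4899 0.3919

# The squared changes offset exactly, per Equation (1.29):
print(round(1.2 ** 2 - 1.1 ** 2, 2), round(sd2 ** 2 - sd1 ** 2, 2))  # 0.23 0.23
```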


Notice that in the previous example, where we started with lower dispersion values (SD or M), how much proportionally greater an increase was required to yield the same G. Thus we can state that the more you reduce your dispersion, the better, with each reduction providing greater and greater benefit. It is an exponential function, with a limit at the dispersion equal to zero, where G is then equal to A.

A trader who is trading on a fixed fractional basis wants to maximize G, not necessarily A. In maximizing G, the trader should realize that the standard deviation, SD, affects G in the same proportion as does A, per the Pythagorean Theorem! Thus, when the trader reduces the standard deviation (SD) of his or her trades, it is equivalent to an equal increase in the arithmetic average HPR (A), and vice versa!

**THE FUNDAMENTAL EQUATION OF TRADING**

We can glean a lot more here than just how trimming the size of our losses improves our bottom line. We return now to Equation (1.19a):

(1.19a) Estimated TWR = ((AHPR^2-SD^2)^(1/2))^N

We again replace AHPR with A, representing the arithmetic average HPR. Also, since (X^Y)^Z = X^(Y*Z), we can further simplify the exponents in the equation, thus obtaining:

(1.19c) Estimated TWR = (A^2-SD^2)^(N/2)

This last equation, the simplification for the estimated TWR, we call the fundamental equation for trading, since it describes how the different factors, A, SD, and N, affect our bottom line in trading. A few things are readily apparent. The first of these is that if A is less than or equal to 1, then regardless of the other two variables, SD and N, our result can be no greater than 1. If A is less than 1, then as N approaches infinity, the estimated TWR approaches zero.

This means that if A is less than or equal to 1, we do not stand a chance at making profits. In fact, if A is less than 1, it is simply a matter of time (i.e., as N increases) until we go broke. Provided that A is greater than 1, we can see that increasing N increases our total profits. For each increase of 1 trade, the estimated TWR is further multiplied by the square root of the coefficient. For instance, suppose your system showed an arithmetic mean of 1.1 and a standard deviation of .25. Thus:

Estimated TWR = (1.1^2-.25^2)^(N/2) = (1.21-.0625)^(N/2) = 1.1475^(N/2)

Each time we can increase N by 1, we increase our TWR by a factor equivalent to the square root of the coefficient. In the case of our example, where we have a coefficient of 1.1475, 1.1475^(1/2) = 1.071214264. Thus every trade increase, every 1-point increase in N, is equivalent to multiplying our final stake by 1.071214264. Notice that this figure is the geometric mean. Each time a trade occurs, each time N is increased by 1, the estimated TWR is multiplied by the geometric mean. Herein is the real benefit of diversification expressed mathematically in the fundamental equation of trading. Diversification lets you get more N off in a given period of time.
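Using the example figures (A = 1.1, SD = .25), the coefficient and the per-trade growth factor fall out directly:

```python
from math import sqrt

# The fundamental equation of trading: Estimated TWR = (A^2 - SD^2)^(N/2)
a, sd = 1.1, 0.25
coeff = a ** 2 - sd ** 2          # 1.1475, the coefficient
g = sqrt(coeff)                   # ~ 1.071214264, the geometric mean per trade

print(round(coeff, 4), round(g, 9))

# Each unit increase in N multiplies the estimated TWR by g:
for n in (1, 2, 10):
    print(n, round(coeff ** (n / 2), 6))
```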

The other important point to note about the fundamental trading equation is that it shows that if you reduce your standard deviation more than you reduce your arithmetic average HPR, you are better off. It stands to reason, therefore, that cutting your losses short, if possible, benefits you. But the equation demonstrates that at some point you no longer benefit by cutting your losses short. That point is the point where you would be getting stopped out of too many trades with a small loss that later would have turned profitable, thus reducing your A to a greater extent than your SD. Along these same lines, reducing big winning trades can help your program if it reduces your SD more than it reduces your A.

In many cases, this can be accomplished by incorporating options into your trading program. Having an option position that goes against your position in the underlying can possibly help. For instance, if you are long a given stock (or commodity), buying a put option (or writing a call option) may reduce your SD on this net position more than it reduces your A. If you are profitable on the underlying, you will be unprofitable on the option, but profitable overall, only to a lesser extent than had you not had the option position. Hence, you have reduced both your SD and your A. If you are unprofitable on the underlying, you will have increased your A and decreased your SD. All told, you will tend to have reduced your SD to a greater extent than you have reduced your A.

Of course, transaction costs are a large consideration in such a strategy, and they must always be taken into account. Your program may be too short-term oriented to take advantage of such a strategy, but it does point out the fact that different strategies, along with different trading rules, should be looked at relative to the fundamental trading equation. In doing so, we gain an insight into how these factors will affect the bottom line, and what specifically we can work on to improve our method. Suppose, for instance, that our trading program was long-term enough that the aforementioned strategy of buying a put in conjunction with a long position in the underlying was feasible and resulted in a greater estimated TWR.

Such a position, a long position in the underlying and a long put, is equivalent to simply being outright long the call. Hence, we are better off simply to be long the call, as it will result in considerably lower transaction costs than being both long the underlying and long the put option. To demonstrate this, we'll use the extreme example of the stock indexes in 1987. Let's assume that we can actually buy the underlying OEX index. The system we will use is a simple 20-day channel breakout. Each day we calculate the highest high and lowest low of the last 20 days. Then, throughout the day, if the market comes up and touches the high point, we enter long on a stop. If the market comes down and touches the low point, we go short on a stop. If the daily opens are through the entry points, we enter on the open. The system is always in the market:

If we were to determine the optimal f on this stream of trades, we would find its corresponding geometric mean, the growth factor on our stake per play, to be 1.12445. Now we will take the exact same trades, only, using the Black-Scholes stock option pricing model, we will convert the entry prices to theoretical option prices. The inputs into the pricing model are the historical volatility determined on a 20-day basis, a risk-free rate of 6%, and a 260.8875-day year. Further, we will assume that we are buying options with exactly .5 of a year left till expiration (6 months) and that they are at-the-money. In other words, there is a strike price corresponding to the exact entry price. Buying long a call when the system goes long the underlying, and buying long a put when the system goes short the underlying, using the parameters of the option pricing model mentioned, would have resulted in a trade stream as follows:

If we were to determine the optimal f on this stream of trades, we would find its corresponding geometric mean, the growth factor on our stake per play, to be 1.2166, which compares to the geometric mean at the optimal f for the underlying of 1.12445. This is an enormous difference. Since there are a total of 6 trades, we can raise each geometric mean to the power of 6 to determine the TWR on our stake at the end of the 6 trades. This returns a TWR on the underlying of 2.02 versus a TWR on the options of 3.24. Subtracting 1 from each TWR translates these results to percentage gains on our starting stake, or a 102% gain trading the underlying and a 224% gain making the same trades in the options. The options are clearly superior in this case, as the fundamental equation of trading testifies.
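The trade-by-trade data from the study are not reproduced here, but the pricing step can be sketched with the standard Black-Scholes formula for European options. The 20% volatility and $100 price below are illustrative only; the study derived its volatility from 20 days of history and used the actual OEX entry prices:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal cumulative distribution via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes(s, k, t, r, sigma, call=True):
    """Standard Black-Scholes price for a European option (no dividends)."""
    d1 = (log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    if call:
        return s * norm_cdf(d1) - k * exp(-r * t) * norm_cdf(d2)
    return k * exp(-r * t) * norm_cdf(-d2) - s * norm_cdf(-d1)

# At-the-money, 6 months to expiration, 6% risk-free rate, illustrative 20% vol
price = black_scholes(s=100.0, k=100.0, t=0.5, r=0.06, sigma=0.20)
print(round(price, 2))
```

At-the-money means s equals k, matching the assumption in the text that a strike exists at the exact entry price.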

Trading long the options outright as in this example may not always be superior to being long the underlying instrument. This example is an extreme case, yet it does illuminate the fact that trading strategies should be looked at in light of the fundamental equation for trading in order to be judged properly. As you can see, the fundamental trading equation can be utilized to dictate many changes in our trading. These changes may be in the way of tightening (or loosening) our stops, setting targets, and so on. These changes are the results of inefficiencies in the way we are carrying out our trading as well as inefficiencies in our trading program or methodology.
