### RISK ANALYSIS TECHNIQUES FOR TRADERS: The Empirical Techniques


**DECIDING ON QUANTITY**

Whenever you enter a trade, you have made two decisions: Not only have you decided whether to enter long or short, you have also decided upon the quantity to trade in. This decision regarding quantity is always a function of your account equity. If you have a $10,000 account, don't you think you would be leaning into the trade a little if you put on 100 gold contracts? Likewise, if you have a $10 million account, don't you think you'd be a little light if you only put on one gold contract? Whether we acknowledge it or not, the decision of what quantity to have on for a given trade is inseparable from the level of equity in our account. It is a very fortunate fact for us, though, that an account will grow the fastest when we trade a fraction of the account on each and every trade, in other words, when we trade a quantity relative to the size of our stake.

However, the quantity decision is not simply a function of the equity in our account, it is also a function of a few other things. It is a function of our perceived "worst-case" loss on the next trade. It is a function of the speed with which we wish to make the account grow. It is a function of dependency to past trades. More variables than these just mentioned may be associated with the quantity decision, yet we try to agglomerate all of these variables, including the account's level of equity, into a subjective decision regarding quantity: How many contracts or shares should we put on? In this discussion, you will learn how to make the mathematically correct decision regarding quantity. You will no longer have to make this decision subjectively. You will see that there is a steep price to be paid by not having on the correct quantity, and this price increases as time goes by.

Most traders gloss over this decision about quantity. They feel that it is somewhat arbitrary in that it doesn't much matter what quantity they have on. What matters is that they be right about the direction of the trade. Furthermore, they have the mistaken impression that there is a straight-line relationship between how many contracts they have on and how much they stand to make or lose in the long run. This is not correct. As we shall see in a moment, the relationship between potential gain and quantity risked is not a straight line. It is curved. There is a peak to this curve, and it is at this peak that we maximize potential gain per quantity at risk. Furthermore, as you will see throughout this discussion, the decision regarding quantity for a given trade is as important as the decision to enter long or short in the first place.

Contrary to most traders' misconception, whether you are right or wrong on the direction of the market when you enter a trade does not dominate whether or not you have the right quantity on. Ultimately, we have no control over whether the next trade will be profitable or not. Yet we do have control over the quantity we have on. Since one does not dominate the other, our resources are better spent concentrating on putting on the right quantity. On any given trade, you have a perceived worst-case loss. You may not even be conscious of this, but whenever you enter a trade you have some idea in your mind, even if only subconsciously, of what can happen to this trade in the worst case. This worst-case perception, along with the level of equity in your account, shapes your decision about how many contracts to trade.

Thus, we can now state that there is a divisor of this biggest perceived loss, a number between 0 and 1 that you will use in determining how many contracts to trade. For instance, if you have a $50,000 account, if you expect, in the worst case, to lose $5,000 per contract, and if you have on 5 contracts, your divisor is .5, since:

50,000/(5,000/.5) = 5

In other words, you have on 5 contracts for a $50,000 account, so you have 1 contract for every $10,000 in equity. You expect in the worst case to lose $5,000 per contract, thus your divisor here is .5. If you had on only 1 contract, your divisor in this case would be .1 since:

50,000/(5,000/.1) = 1
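The relationship between equity, worst-case loss, and the divisor can be sketched in a few lines of Python (the function name is ours, introduced only for illustration):

```python
def contracts(equity, worst_case_loss, f):
    """Number of contracts implied by a divisor f: trade one
    contract for every (worst_case_loss / f) dollars of equity."""
    return equity / (worst_case_loss / f)

# The two examples from the text: a $50,000 account with a
# $5,000 worst-case loss per contract.
print(contracts(50_000, 5_000, 0.5))  # 5.0 contracts (1 per $10,000)
print(contracts(50_000, 5_000, 0.1))  # 1.0 contract (1 per $50,000)
```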

*Figure: TWR after 40 bets (20 sequences of +2, -1) at various values of f.*

This divisor we will call by its variable name f. Thus, whether consciously or subconsciously, on any given trade you are selecting a value for f when you decide how many contracts or shares to put on. The figure depicts a game where you have a 50% chance of winning $2 versus a 50% chance of losing $1 on every play. Notice that here the optimal f is .25, where the TWR is 10.55 after 40 bets (20 sequences of +2, -1). TWR stands for Terminal Wealth Relative. It represents the return on your stake as a multiple. A TWR of 10.55 means you would have made 10.55 times your original stake, or 955% profit. Now look at what happens if you bet only 15% away from the optimal .25 f. At an f of .1 or .4 your TWR is 4.66. This is not even half of what it is at .25, yet you are only 15% away from the optimal and only 40 bets have elapsed! How much are we talking about in terms of dollars? At f = .1, you would be making 1 bet for every $10 in your stake.

At f = .4, you would be making 1 bet for every $2.50 in your stake. Both make the same amount with a TWR of 4.66. At f = .25, you are making 1 bet for every $4 in your stake. Notice that if you make 1 bet for every $4 in your stake, you will make more than twice as much after 40 bets as you would if you were making 1 bet for every $2.50 in your stake! Clearly it does not pay to overbet. At 1 bet per every $2.50 in your stake you make the same amount as if you had bet a quarter of that amount, 1 bet for every $10 in your stake! Notice that in a 50/50 game where you win twice the amount that you lose, at an f of .5 you are only breaking even! That means you are only breaking even if you made 1 bet for every $2 in your stake. At an f greater than .5 you are losing in this game, and it is simply a matter of time until you are completely tapped out! In other words, if your f in this 50/50, 2:1 game is .25 beyond what is optimal, you will go broke with a probability that approaches certainty as you continue to play.
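A short sketch makes the shape of this curve concrete. Assuming the holding-period return on a win is 1 + 2f and on a loss is 1 - f, the TWR after 20 sequences of +2, -1 is:

```python
def twr(f, sequences=20):
    """Terminal Wealth Relative for the 50/50 game that wins $2 or
    loses $1, betting the fraction f of the biggest loss each play."""
    hpr_win = 1 + 2 * f    # holding-period return on a +2 win
    hpr_loss = 1 - f       # holding-period return on a -1 loss
    return (hpr_win * hpr_loss) ** sequences

for f in (0.1, 0.25, 0.4, 0.5):
    print(f, round(twr(f), 2))  # peaks at f = .25 with TWR 10.55
```

Note that f = .1 and f = .4 give identical TWRs of 4.66, and f = .5 returns exactly 1.0, the break-even point described above.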

Our goal, then, is to objectively find the peak of the f curve for a given trading system. In this discussion certain concepts will be illuminated in terms of gambling illustrations. The main difference between gambling and speculation is that gambling creates risk whereas speculation is a transference of an already existing risk (supposedly) from one party to another. The gambling illustrations are used to illustrate the concepts as clearly and simply as possible. The mathematics of money management and the principles involved in trading and gambling are quite similar. The main difference is that in the math of gambling we are usually dealing with Bernoulli outcomes (only two possible outcomes), whereas in trading we are dealing with the entire probability distribution that the trade may take.

**BASIC CONCEPTS**

A probability statement is a number between 0 and 1 that specifies how probable an outcome is, with 0 being no probability whatsoever of the event in question occurring and 1 being that the event in question is certain to occur. An independent trials process is a sequence of outcomes where the probability statement is constant from one event to the next. A coin toss is an example of just such a process. Each toss has a 50/50 probability regardless of the outcome of the prior toss. Even if the last 5 flips of a coin were heads, the probability of this flip being heads is unaffected and remains .5. Naturally, the other type of random process is one in which the outcome of prior events does affect the probability statement, and naturally, the probability statement is not constant from one event to the next. These types of events are called dependent trials processes.

Blackjack is an example of just such a process. Once a card is played, the composition of the deck changes. Suppose a new deck is shuffled and a card removed-say, the ace of diamonds. Prior to removing this card the probability of drawing an ace was 4/52 or .07692307692. Now that an ace has been drawn from the deck, and not replaced, the probability of drawing an ace on the next draw is 3/51 or .05882352941. Try to think of the difference between independent and dependent trials processes as simply whether the probability statement is fixed (independent trials) or variable (dependent trials) from one event to the next based on prior outcomes. This is in fact the only difference.

**THE RUNS TEST**

When we do sampling without replacement from a deck of cards, we can determine by inspection that there is dependency. For certain events (such as the profit and loss stream of a system's trades) where dependency cannot be determined upon inspection, we have the runs test. The runs test will tell us if our system has more (or fewer) streaks of consecutive wins and losses than a random distribution. The runs test is essentially a matter of obtaining the Z scores for the win and loss streaks of a system's trades. A Z score is how many standard deviations you are away from the mean of a distribution. Thus, a Z score of 2.00 is 2.00 standard deviations away from the mean (the expectation of a random distribution of streaks of wins and losses).

The Z score is simply the number of standard deviations the data is from the mean of the Normal Probability Distribution. For example, a Z score of 1.00 would mean that the data you are testing is within 1 standard deviation from the mean. Incidentally, this is perfectly normal. The Z score is then converted into a confidence limit, sometimes also called a degree of certainty. The area under the curve of the Normal Probability Function at 1 standard deviation on either side of the mean equals 68% of the total area under the curve. So we take our Z score and convert it to a confidence limit, the relationship being that the Z score is a number of standard deviations from the mean and the confidence limit is the percentage of area under the curve occupied at so many standard deviations.

With a minimum of 30 closed trades we can now compute our Z scores. What we are trying to answer is how many streaks of wins (losses) can we expect from a given system? Are the win (loss) streaks of the system we are testing in line with what we could expect? If not, is there a high enough confidence limit that we can assume dependency exists between trades -i.e., is the outcome of a trade dependent on the outcome of previous trades?

Here then is the equation for the runs test, the system's Z score:

(1.01) Z = (N*(R-.5)-X)/((X*(X-N))/(N-1))^(1/2)

where,

N = The total number of trades in the sequence.

R = The total number of runs in the sequence.

X = 2*W*L

W = The total number of winning trades in the sequence.

L = The total number of losing trades in the sequence.

Here is how to perform this computation:

1. Compile the following data from your run of trades:

A. The total number of trades, hereafter called N.

B. The total number of winning trades and the total number of losing trades. Now compute what we will call X. X = 2*Total Number of Wins*Total Number of Losses.

C. The total number of runs in a sequence. We'll call this R.

Let's construct an example to follow along with. Assume the following trades:

-3 +2 +7 -4 +1 -1 +1 +6 -1 0 -2 +1

The net profit is +7. The total number of trades is 12, so N = 12 (fewer than the minimum requirement of 30 trades, but this keeps the example simple). We are not now concerned with how big the wins and losses are, but rather with how many wins and losses there are and how many streaks. Therefore, we can reduce our run of trades to a simple sequence of pluses and minuses. Note that a trade with a P&L of 0 is regarded as a loss. We now have:

- + + - + - + + - - - +

As can be seen, there are 6 profits and 6 losses; therefore, X = 2*6*6 = 72. As can also be seen, there are 8 runs in this sequence; therefore, R = 8. We define a run as anytime you encounter a sign change when reading the sequence as just shown from left to right (i.e., chronologically). Assume also that you start at 1.

You would thus count this sequence as follows:

- | + + | - | + | - | + + | - - - | +

Each vertical bar marks a sign change; counting the first run as 1, the seven sign changes bring the count to 8 runs.

2. Solve the expression: N*(R-.5)-X

For our example this would be: 12*(8-.5)-72

12*7.5-72

90-72

18

3. Solve the expression: (X*(X-N))/(N-1)

For our example this would be: (72*(72-12))/(12-1)

(72*60)/11

4320/11

392.727272

4. Take the square root of the answer in number 3. For our example this would be:

392.727272^(1/2) = 19.81734777

5. Divide the answer in number 2 by the answer in number 4. This is your Z score. For our example this would be:

18/19.81734777 = .9082951063

6. Now convert your Z score to a confidence limit. The distribution of runs is binomially distributed. However, when there are 30 or more trades involved, we can use the Normal Distribution to very closely approximate the binomial probabilities. Thus, if you are using 30 or more trades, you can simply convert your Z score to a confidence limit based upon the equation for 2-tailed probabilities in the Normal Distribution.
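The six steps above can be collected into one function; a minimal sketch (the function name is ours):

```python
def runs_z(trades):
    """Z score of the runs test, per equation (1.01).
    A trade with a P&L of 0 is counted as a loss."""
    signs = [1 if t > 0 else -1 for t in trades]
    N = len(signs)
    W = signs.count(1)           # total winning trades
    L = N - W                    # total losing trades
    X = 2 * W * L
    # Count runs: start at 1 and add 1 at every sign change.
    R = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    return (N * (R - 0.5) - X) / ((X * (X - N)) / (N - 1)) ** 0.5

# The 12-trade example from the text:
print(runs_z([-3, 2, 7, -4, 1, -1, 1, 6, -1, 0, -2, 1]))  # 0.9082951...
```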

The runs test will tell you if your sequence of wins and losses contains more or fewer streaks than would ordinarily be expected in a truly random sequence, one that has no dependence between trials. Since we are at such a relatively low confidence limit in our example, we can assume that there is no dependence between trials in this particular sequence. If your Z score is negative, simply convert it to positive when finding your confidence limit. A negative Z score implies positive dependency, meaning fewer streaks than the Normal Probability Function would imply and hence that wins beget wins and losses beget losses. A positive Z score implies negative dependency, meaning more streaks than the Normal Probability Function would imply and hence that wins beget losses and losses beget wins.
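The conversion from Z score to two-tailed confidence limit can be done with the error function, since the confidence limit is simply the fraction of the Normal curve's area within z standard deviations of the mean:

```python
from math import erf, sqrt

def confidence_limit(z):
    """Two-tailed confidence limit for a Z score: the fraction of the
    area under the Normal curve within |z| standard deviations."""
    return erf(abs(z) / sqrt(2))

print(round(confidence_limit(0.9083), 3))  # about 0.636, a low confidence limit
print(round(confidence_limit(2.0), 4))     # 0.9545, the 95.45% threshold
```

Our example's Z score of roughly .91 corresponds to only about 63.6% confidence, which is why no dependency is assumed.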

What would an acceptable confidence limit be? Statisticians generally recommend selecting a confidence limit at least in the high nineties. Some statisticians recommend a confidence limit in excess of 99% in order to assume dependency, some recommend a less stringent minimum of 95.45%. Rarely, if ever, will you find a system that shows confidence limits in excess of 95.45%. Most frequently the confidence limits encountered are less than 90%. Even if you find a system with a confidence limit between 90 and 95.45%, this is not exactly a nugget of gold. To assume that there is dependency involved that can be capitalized upon to make a substantial difference, you really need to exceed 95.45% as a bare minimum.

As long as the dependency is at an acceptable confidence limit, you can alter your behavior accordingly to make better trading decisions, even though you do not understand the underlying cause of the dependency. If you could know the cause, you could then better estimate when the dependency was in effect and when it was not, as well as when a change in the degree of dependency could be expected. So far, we have only looked at dependency from the point of view of whether the last trade was a winner or a loser. We are trying to determine if the sequence of wins and losses exhibits dependency or not.

The runs test for dependency automatically takes the percentage of wins and losses into account. However, in performing the runs test on runs of wins and losses, we have accounted for the sequence of wins and losses but not their size. In order to have true independence, not only must the sequence of the wins and losses be independent, the sizes of the wins and losses within the sequence must also be independent. It is possible for the wins and losses to be independent, yet their sizes to be dependent (or vice versa). One possible solution is to run the runs test on only the winning trades, segregating the runs in some way, and then look for dependency among the size of the winning trades. Then do this for the losing trades.

**SERIAL CORRELATION**

There is a different, perhaps better, way to quantify this possible dependency between the size of the wins and losses. The technique to be discussed next looks at the sizes of wins and losses from an entirely different mathematical perspective than does the runs test, and hence, when used in conjunction with the runs test, measures the relationship of trades with more depth than the runs test alone could provide. This technique utilizes the linear correlation coefficient, r, sometimes called Pearson's r, to quantify the dependency/independency relationship. The first figure depicts two sequences that are perfectly correlated with each other. We call this effect positive correlation.

*Figure Positive correlation (r = +1.00).*

*Figure Negative correlation (r = -1.00).*

The second figure shows two sequences that are perfectly negatively correlated with each other. When one line is zigging the other is zagging. We call this effect negative correlation. The formula for finding the linear correlation coefficient, r, between two sequences, X and Y, is as follows (a bar over a variable means the arithmetic mean of the variable):

(1.02) r = (∑a(Xa-X̄)*(Ya-Ȳ))/((∑a(Xa-X̄)^2)^(1/2)*(∑a(Ya-Ȳ)^2)^(1/2))

Here is how to perform the calculation:

1. Average the X's and the Y's (the means X̄ and Ȳ).

2. For each period find the difference between each X and the average X and each Y and the average Y.

3. Now calculate the numerator. To do this, for each period multiply the answers from step 2. In other words, for each period multiply together the differences between that period's X and the average X and between that period's Y and the average Y.

4. Total up all of the answers to step 3 for all of the periods. This is the numerator.

5. Now find the denominator. To do this, take the answers to step 2 for each period, for both the X differences and the Y differences, and square them.

6. Sum up the squared X differences for all periods into one final total. Do the same with the squared Y differences.

7. Take the square root of the sum of the squared X differences you just found in step 6. Now do the same with the Y's by taking the square root of the sum of the squared Y differences.

8. Multiply together the two answers you just found in step 7. That is, multiply together the square root of the sum of the squared X differences by the square root of the sum of the squared Y differences. This product is your denominator.

9. Divide the numerator you found in step 4 by the denominator you found in step 8. This is your linear correlation coefficient, r.
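The steps above translate directly into a short function (a sketch, not tied to any particular library):

```python
def pearson_r(xs, ys):
    """Linear correlation coefficient r per equation (1.02)."""
    n = len(xs)
    x_bar, y_bar = sum(xs) / n, sum(ys) / n          # average the X's and Y's
    dx = [x - x_bar for x in xs]                     # differences from the means
    dy = [y - y_bar for y in ys]
    numerator = sum(a * b for a, b in zip(dx, dy))   # sum of cross-products
    denominator = (sum(a * a for a in dx) ** 0.5
                   * sum(b * b for b in dy) ** 0.5)  # product of root sums of squares
    return numerator / denominator

print(round(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]), 6))   # 1.0, perfect positive
print(round(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]), 6))   # -1.0, perfect negative
```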

The value for r will always be between +1.00 and -1.00. A value of 0 indicates no correlation whatsoever. Consider the following sequence of 21 trades:

1, 2, 1, -1, 3, 2, -1, -2, -3, 1, -2, 3, 1, 1, 2, 3, 3, -1, 2, -1, 3

We can use the linear correlation coefficient in the following manner to see if there is any correlation between the previous trade and the current trade. The idea here is to treat the trade P&L's as the X values in the formula for r. Superimposed over that we duplicate the same trade P&L's, only this time we skew them by 1 trade and use these as the Y values in the formula for r. In other words, the Y value is the previous X value.

*Figure Individual outcomes of 21 trades skewed by 1 trade.*

The averages differ because you only average those X's and Y's that have a corresponding X or Y value, so the last Y value (3) is not figured in the Y average, nor is the first X value (1) figured in the X average.

The numerator is the total of all entries in column E (0.8). To find the denominator, we take the square root of the total in column F, which is 8.555699, and we take the square root of the total in column G, which is 8.258329, and multiply them together to obtain a denominator of 70.65578. We now divide our numerator of 0.8 by our denominator of 70.65578 to obtain .011322. This is our linear correlation coefficient, r. The linear correlation coefficient of .011322 in this case is hardly indicative of anything, but it is pretty much in the range you can expect for most trading systems. High positive correlation (at least .25) generally suggests that big wins are seldom followed by big losses and vice versa.
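As a check, the worked example can be reproduced in a few lines; the pairing below (each trade against its predecessor) follows the skew-by-one construction described above:

```python
trades = [1, 2, 1, -1, 3, 2, -1, -2, -3, 1, -2, 3,
          1, 1, 2, 3, 3, -1, 2, -1, 3]

# X is each trade from the second onward; Y is the previous trade.
# The first trade drops out of the X average and the last trade out
# of the Y average, leaving 20 pairs.
xs, ys = trades[1:], trades[:-1]

n = len(xs)
x_bar, y_bar = sum(xs) / n, sum(ys) / n    # 0.8 and 0.7, as in the text
dx = [x - x_bar for x in xs]
dy = [y - y_bar for y in ys]
r = sum(a * b for a, b in zip(dx, dy)) / (
    sum(a * a for a in dx) ** 0.5 * sum(b * b for b in dy) ** 0.5)
print(round(r, 4))  # 0.0113, agreeing with the text's .011322
```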

Negative correlation readings (below -.25 to -.30) imply that big losses tend to be followed by big wins and vice versa. The correlation coefficients can be translated, by a technique known as Fisher's Z transformation, into a confidence level for a given number of trades. Negative correlation is just as helpful as positive correlation. For example, if there appears to be negative correlation and the system has just suffered a large loss, we can expect a large win and would therefore have more contracts on than we ordinarily would. If this trade proves to be a loss, it will most likely not be a large loss (due to the negative correlation). Finally, in determining dependency you should also consider out-of-sample tests.

That is, break your data segment into two or more parts. If you see dependency in the first part, then see if that dependency also exists in the second part, and so on. This will help eliminate cases where there appears to be dependency when in fact no dependency exists. Using these two tools (the runs test and the linear correlation coefficient) can help answer many of these questions. However, they can only answer them if you have a high enough confidence limit and/or a high enough correlation coefficient. Most of the time these tools are of little help, because all too often the universe of futures system trades is dominated by independency.

If you get readings indicating dependency, and you want to take advantage of it in your trading, you must go back and incorporate a rule in your trading logic to exploit the dependency. In other words, you must go back and change the trading system logic to account for this dependency. Thus, we can state that if dependency shows up in your trades, you haven't maximized your system. In other words, dependency, if found, should be exploited until it no longer appears to exist. The first stage in money management is therefore to exploit, and hence remove, any dependency in trades. We have been discussing dependency in the stream of trade profits and losses. You can also look for dependency between an indicator and the subsequent trade, or between any two variables.

**COMMON DEPENDENCY ERRORS**

As traders we must generally assume that dependency does not exist in the marketplace for the majority of market systems. That is, when trading a given market system, we will usually be operating in an environment where the outcome of the next trade is not predicated upon the outcome(s) of prior trade(s). That is not to say that there is never dependency between trades for some market systems, only that we should act as though dependency does not exist unless there is very strong evidence to the contrary. Such would be the case if the Z score and the linear correlation coefficient indicated dependency, and the dependency held up across markets and across optimizable parameter values. If we act as though there is dependency when the evidence is not overwhelming, we may well just be fooling ourselves and causing more self-inflicted harm than good as a result.

Even if a system showed dependency to a 95% confidence limit for all values of a parameter, that is still hardly a high enough confidence limit to assume that dependency does in fact exist between the trades of a given market or system. A type I error is committed when we reject an hypothesis that should be accepted. If, however, we accept an hypothesis when it should be rejected, we have committed a type II error. Absent knowledge of whether an hypothesis is correct or not, we must decide on the penalties associated with a type I and type II error. Sometimes one type of error is more serious than the other, and in such cases we must decide whether to accept or reject an unproven hypothesis based on the lesser penalty.

Suppose you are considering using a certain trading system, yet you're not extremely sure that it will hold up when you go to trade it real-time. Here, the hypothesis is that the trading system will hold up real-time. You decide to accept the hypothesis and trade the system. If it does not hold up, you will have committed a type II error, and you will pay the penalty in terms of the losses you have incurred trading the system real-time. On the other hand, if you choose to not trade the system, and it is profitable, you will have committed a type I error. In this instance, the penalty you pay is in forgone profits. Which is the lesser penalty to pay? Clearly it is the latter, the forgone profits of not trading the system.

Although from this example you can conclude that if you're going to trade a system real-time it had better be profitable, there is an ulterior motive for using this example. If we assume there is dependency, when in fact there isn't, we will have committed a type II error. The penalty we pay will not be in forgone profits, but in actual losses. However, if we assume there is not dependency when in fact there is, we will have committed a type I error and our penalty will be in forgone profits. Clearly, we are better off paying the penalty of forgone profits than undergoing actual losses. Therefore, unless there is absolutely overwhelming evidence of dependency, you are much better off assuming that the profits and losses in trading are independent of prior outcomes.

There seems to be a paradox presented here. First, if there is dependency in the trades, then the system is suboptimal. Yet dependency can never be proven beyond a doubt. Now, if we assume and act as though there is dependency, we have committed a more expensive error than if we assume and act as though dependency does not exist. For instance, suppose we have a system with a history of 60 trades, and suppose we see dependency to a confidence level of 95% based on the runs test. We want our system to be optimal, so we adjust its rules accordingly to exploit this apparent dependency. After we have done so, say we are left with 40 trades, and dependency no longer is apparent. We are therefore satisfied that the system rules are optimal. These 40 trades will now have a higher optimal f than the entire 60.

If you go and trade this system with the new rules to exploit the dependency, and the higher concomitant optimal f, and if the dependency is not present, your performance will be closer to that of the 60 trades, rather than the superior 40 trades. Thus, the f you have chosen will be too far to the right, resulting in a big price to pay on your part for assuming dependency. If dependency is there, then you will be closer to the peak of the f curve by assuming that the dependency is there. Had you decided not to assume it when in fact there was dependency, you would tend to be to the left of the peak of the f curve, and hence your performance would be suboptimal. In a nutshell, look for dependency. If it shows to a high enough degree across parameter values and markets for that system, then alter the system rules to capitalize on the dependency. Otherwise, in the absence of overwhelming statistical evidence of dependency, assume that it does not exist.

**MATHEMATICAL EXPECTATION**

By the same token, you are better off not to trade unless there is absolutely overwhelming evidence that the market system you are contemplating trading will be profitable; that is, unless you fully expect the market system in question to have a positive mathematical expectation when you trade it real-time. Mathematical expectation is the amount you expect to make or lose, on average, each bet. In gambling parlance this is sometimes known as the player's edge or the house's advantage:

(1.03) Mathematical Expectation = ∑[i = 1,N](Pi*Ai)

where,

P = Probability of winning or losing.

A = Amount won or lost.

N = Number of possible outcomes.

The mathematical expectation is computed by multiplying each possible gain or loss by the probability of that gain or loss and then summing these products together. Let's look at the mathematical expectation for a game where you have a 50% chance of winning $2 and a 50% chance of losing $1 under this formula:

Mathematical Expectation = (.5*2)+(.5*(-1)) = 1+(-.5) = .5

In such an instance, of course, your mathematical expectation is to win 50 cents per toss on average. Consider betting on one number in roulette, where your mathematical expectation is:

ME = ((1/38)*35)+((37/38)*(-1))

= (.02631578947*35)+(.9736842105*(-1))

= (.9210526315)+(-.9736842105)

= -.0526315790

Here, if you bet $1 on one number in roulette (American double-zero) you would expect to lose, on average, 5.26 cents per roll. If you bet $5, you would expect to lose, on average, 26.3 cents per roll. Notice that different amounts bet have different mathematical expectations in terms of amounts, but the expectation as a percentage of the amount bet is always the same. The player's expectation for a series of bets is the total of the expectations for the individual bets. So if you go play $1 on a number in roulette, then $10 on a number, then $5 on a number, your total expectation is:

ME = (-.0526*1)+(-.0526*10)+(-.0526*5) = -.0526-.526-.263 = -.8416

You would therefore expect to lose, on average, 84.16 cents. This principle explains why systems that try to change the sizes of their bets relative to how many wins or losses have been seen (assuming an independent trials process) are doomed to fail. The summation of negative expectation bets is always a negative expectation! The most fundamental point that you must understand in terms of money management is that in a negative expectation game, there is no money-management scheme that will make you a winner.
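Equation (1.03) and the examples above can be checked with a few lines (the function name is ours, not part of any library):

```python
def expectation(outcomes):
    """Mathematical expectation, equation (1.03): the sum of each
    outcome's amount weighted by its probability."""
    return sum(p * a for p, a in outcomes)

# 50% chance of winning $2 versus 50% chance of losing $1:
print(expectation([(0.5, 2), (0.5, -1)]))                 # 0.5
# $1 on a single number in double-zero roulette:
print(round(expectation([(1/38, 35), (37/38, -1)]), 4))   # -0.0526
# A series of bets: $1, $10, and $5 on single roulette numbers.
# Prints -0.8421; the text's -.8416 comes from using the rounded -.0526.
print(round(sum(expectation([(1/38, 35 * b), (37/38, -b)])
                for b in (1, 10, 5)), 4))
```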

If you continue to bet, regardless of how you manage your money, it is almost certain that you will be a loser, losing your entire stake no matter how large it was to start. This axiom is not only true of a negative expectation game, it is true of an even-money game as well. Therefore, the only game you have a chance at winning in the long run is a positive arithmetic expectation game. Even then, you can only win if you either always bet the same constant bet size or bet with an f value less than the f value corresponding to the point where the geometric mean HPR equals 1.

This axiom is true only in the absence of an upper absorbing barrier. For example, let's assume a gambler who starts out with a $100 stake who will quit playing if his stake grows to $101. This upper target of $101 is called an absorbing barrier. Let's suppose our gambler is always betting $1 per play on red in roulette. Thus, he has a slight negative mathematical expectation. The gambler is far more likely to see his stake grow to $101 and quit than he is to see his stake go to zero and be forced to quit. If, however, he repeats this process over and over, he will find himself in a negative mathematical expectation. If he intends on playing this game like this only once, then the axiom of going broke with certainty, eventually, does not apply.

The difference between a negative expectation and a positive one is the difference between life and death. It doesn't matter so much how positive or how negative your expectation is; what matters is whether it is positive or negative. So before money management can even be considered, you must have a positive expectancy game. If you don't, all the money management in the world cannot save you. On the other hand, if you have a positive expectation, you can, through proper money management, turn it into an exponential growth function. It doesn't even matter how marginally positive the expectation is! In other words, it doesn't so much matter how profitable your trading system is on a 1 contract basis, so long as it is profitable, even if only marginally so.

If you have a system that makes $10 per contract per trade, you can use money management to make it far more profitable than a system that shows a $1,000 average trade. What matters, then, is not how profitable your system has been, but rather how certain it is that the system will show at least a marginal profit in the future. Therefore, the most important preparation a trader can do is to make as certain as possible that he has a positive mathematical expectation in the future. The key to ensuring that you have a positive mathematical expectation in the future is to not restrict your system's degrees of freedom. You want to keep your system's degrees of freedom as high as possible to ensure the positive mathematical expectation in the future.

This is accomplished not only by eliminating, or at least minimizing, the number of optimizable parameters, but also by eliminating, or at least minimizing, as many of the system rules as possible. Every parameter you add, every rule you add, every little adjustment and qualification you add to your system diminishes its degrees of freedom. Ideally, you will have a system that is very primitive and simple, and that continually grinds out marginal profits over time in almost all the different markets. Again, it is important that you realize that it really doesn't matter how profitable the system is, so long as it is profitable. The money you will make trading will be made by how effective the money management you employ is.

The trading system is simply a vehicle to give you a positive mathematical expectation on which to use money management. Systems that work on only one or a few markets, or have different rules or parameters for different markets, probably won't work in real time for very long. The problem with most technically oriented traders is that they spend too much time and effort having the computer crank out run after run of different rules and parameter values for trading systems. This is the ultimate "woulda, shoulda, coulda" game. It is completely counterproductive. Rather than concentrating your efforts and computer time toward maximizing your trading system profits, direct the energy toward maximizing the certainty level of a marginal profit.

**TO REINVEST TRADING PROFITS OR NOT**

Let's call the following system "System A." In it we have 2 trades: the first making 50%, the second losing 40%. If we do not reinvest our returns, we make 10%. If we do reinvest, the same sequence of trades loses 10%.

Now let's look at System B, a gain of 15% and a loss of 5%, which also nets out 10% over 2 trades on a nonreinvestment basis, just like System A. But look at the results of System B with reinvestment: Unlike System A, it makes money.
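The arithmetic behind Systems A and B can be checked with a short script (a sketch, not from the original text; trade returns are expressed as decimal fractions, and the function names are illustrative):

```python
def nonreinvestment_return(returns):
    """Net return when every trade risks the same original stake."""
    return sum(returns)

def reinvestment_return(returns):
    """Net return when gains and losses are allowed to compound."""
    equity = 1.0
    for r in returns:
        equity *= 1.0 + r
    return equity - 1.0

system_a = [0.50, -0.40]  # System A: +50%, then -40%
system_b = [0.15, -0.05]  # System B: +15%, then -5%

print(round(nonreinvestment_return(system_a), 4))  # 0.1
print(round(reinvestment_return(system_a), 4))     # -0.1
print(round(nonreinvestment_return(system_b), 4))  # 0.1
print(round(reinvestment_return(system_b), 4))     # 0.0925
```

Both systems net +10% without reinvestment, yet under compounding System A loses 10% while System B gains 9.25%.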

An important characteristic of trading with reinvestment that must be realized is that reinvesting trading profits can turn a winning system into a losing system but not vice versa! A winning system is turned into a losing system in trading with reinvestment if the returns are not consistent enough.

Changing the order or sequence of trades does not affect the final outcome. This is not only true on a nonreinvestment basis, but also true on a reinvestment basis.

As can obviously be seen, the sequence of trades has no bearing on the final outcome, whether viewed on a reinvestment or a nonreinvestment basis. By inspection it would seem you are better off trading on a nonreinvestment basis than you are reinvesting because your probability of winning is greater. However, this is not a valid assumption, because in the real world we do not withdraw all of our profits and make up all of our losses by depositing new cash into an account. Further, the nature of investment or trading is predicated upon the effects of compounding.

If we do away with compounding, we can plan on doing little better in the future than we can today, no matter how successful our trading is between now and then. It is compounding that takes the linear function of account growth and makes it a geometric function. If a system is good enough, the profits generated on a reinvestment basis will be far greater than those generated on a nonreinvestment basis, and that gap will widen as time goes by. If you have a system that can beat the market, it doesn't make any sense to trade it in any other way than to increase your amount wagered as your stake increases.

**MEASURING A GOOD SYSTEM FOR REINVESTMENT: THE GEOMETRIC MEAN**

So far we have seen how a system can be sabotaged by not being consistent enough from trade to trade. Does this mean we should close up and put our money in the bank? Let's go back to System A, with its first 2 trades. For the sake of illustration we are going to add two winners of 1 point each.

**System A**

Trade-by-trade results, starting from a $100 stake:

| Trade P&L | No Reinvestment | With Reinvestment |
|---|---|---|
| +50% | 150.00 | 150.00 |
| -40% | 110.00 | 90.00 |
| +1% | 111.00 | 90.90 |
| +1% | 112.00 | 91.81 |

Now let's take System B and add 2 more losers of 1 point each.

Now, if consistency is what we're really after, let's look at a bank account, the perfectly consistent vehicle, paying 1 point per period. We'll call this series System C.

Our aim is to maximize our profits under reinvestment trading. With that as the goal, we can see that our best reinvestment sequence comes from System B. How could we have known that, given only information regarding nonreinvestment trading? By percentage of winning trades? By total dollars? By average trade? The answer to these questions is "no," because answering "yes" would have us trading System A. What if we opted for most consistency? How about highest risk/reward or lowest drawdown? These are not the answers either. If they were, we should put our money in the bank and forget about trading.

System B has the right mix of profitability and consistency. Systems A and C do not. That is why System B performs the best under reinvestment trading. What is the best way to measure this "right mix"? It turns out there is a formula that will do just that: the geometric mean. This is simply the Nth root of the Terminal Wealth Relative (TWR), where N is the number of periods (trades). The TWR is simply what we've been computing when we figure what the final cumulative amount is under reinvestment. In other words, the TWRs for the three systems we just saw are:

| System | TWR |
|---|---|
| A | .91809 |
| B | 1.070759 |
| C | 1.040604 |

Since there are 4 trades in each of these, we take the TWRs to the 4th root to obtain the geometric mean:

| System | Geometric Mean |
|---|---|
| A | .978861 |
| B | 1.017238 |
| C | 1.009999 |
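These TWRs and geometric means are easy to verify with a short script (a sketch; the four per-trade returns for each system are the ones described above):

```python
def twr(returns):
    """Terminal Wealth Relative: the product of the holding period returns."""
    result = 1.0
    for r in returns:
        result *= 1.0 + r
    return result

def geometric_mean(returns):
    """Nth root of the TWR, where N is the number of trades."""
    return twr(returns) ** (1.0 / len(returns))

systems = {
    "A": [0.50, -0.40, 0.01, 0.01],   # System A plus two 1-point winners
    "B": [0.15, -0.05, -0.01, -0.01], # System B plus two 1-point losers
    "C": [0.01, 0.01, 0.01, 0.01],    # the bank account
}

for name, rets in systems.items():
    # prints: system name, TWR, geometric mean
    print(name, round(twr(rets), 6), round(geometric_mean(rets), 6))
```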

(1.04) TWR = ∏[i = 1,N]HPRi

(1.05) Geometric Mean = TWR^(1/N)

where,

N = Total number of trades.

HPR = Holding period returns.

TWR = The number of dollars of value at the end of a run of periods/bets/trades per dollar of initial investment, assuming gains and losses are allowed to compound.

Here is another way of expressing these variables:

(1.06) TWR = Final Stake/Starting Stake

The geometric mean (G) equals your growth factor per play, or:

(1.07) G = (Final Stake/Starting Stake)^(1/Number of Plays)

Think of the geometric mean as the "growth factor per play" of your stake. The system or market with the highest geometric mean is the system or market that makes the most profit trading on a reinvestment of returns basis. A geometric mean less than one means that the system would have lost money if you were trading it on a reinvestment basis. Investment performance is often measured with respect to the dispersion of returns. Measures such as the Sharpe ratio, Treynor measure, Jensen measure, VAMI, and so on, attempt to relate investment performance to dispersion.

The geometric mean here can be considered another of these types of measures. However, unlike the other measures, the geometric mean measures investment performance relative to dispersion in the same mathematical form as that in which the equity in your account is affected. Equation (1.04) bears out another point. If you suffer an HPR of 0, you will be completely wiped out, because anything multiplied by zero equals zero. Any big losing trade will have a very adverse effect on the TWR, since it is a multiplicative rather than additive function. Thus we can state that in trading you are only as smart as your dumbest mistake.

**HOW BEST TO REINVEST**

Thus far we have discussed reinvestment of returns in trading whereby we reinvest 100% of our stake on all occasions. Although we know that in order to maximize a potentially profitable situation we must use reinvestment, a 100% reinvestment is rarely the wisest thing to do. Take the case of a fair bet (50/50) on a coin toss. Someone is willing to pay you $2 if you win the toss but will charge you $1 if you lose. Our mathematical expectation is .5. In other words, you would expect to make 50 cents per toss, on average. This is true of the first toss and all subsequent tosses, provided you do not step up the amount you are wagering. But in an independent trials process this is exactly what you should do. As you win you should commit more and more to each toss. Suppose you begin with an initial stake of one dollar. Now suppose you win the first toss and are paid two dollars.

Since you had your entire stake ($1) riding on the last bet, you bet your entire stake (now $3) on the next toss as well. However, this next toss is a loser and your entire $3 stake is gone. You have lost your original $1 plus the $2 you had won. If you had won the last toss, it would have paid you $6 since you had three $1 bets on it. The point is that if you are betting 100% of your stake, you'll be wiped out as soon as you encounter a losing wager, an inevitable event. If we were to replay the previous scenario and you had bet on a nonreinvestment basis (i.e., constant bet size) you would have made $2 on the first bet and lost $1 on the second. You would now be net ahead $1 and have a total stake of $2. Somewhere between these two scenarios lies the optimal betting approach for a positive expectation. However, we should first discuss the optimal betting strategy for a negative expectation game.
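The two scenarios can be traced with a few lines of code (a sketch of the win-then-lose sequence just described, with the 2:1 payoff; function names are illustrative):

```python
def full_reinvestment(stake, outcomes, payoff=2.0):
    """Bet the entire current stake on every toss (100% reinvestment)."""
    for win in outcomes:
        # a win pays `payoff` per dollar bet; a loss forfeits the whole stake
        stake = stake + stake * payoff if win else 0.0
    return stake

def constant_bet(stake, outcomes, bet=1.0, payoff=2.0):
    """Bet the same fixed dollar amount on every toss."""
    for win in outcomes:
        stake += bet * payoff if win else -bet
    return stake

outcomes = [True, False]  # win the first toss, lose the second
print(full_reinvestment(1.0, outcomes))  # 0.0 -- wiped out
print(constant_bet(1.0, outcomes))       # 2.0 -- net ahead $1
```

The first losing toss destroys the full-reinvestment bettor, while the constant bettor ends the same sequence with a profit.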

When you know that the game you are playing has a negative mathematical expectation, the best bet is no bet. Remember, there is no money-management strategy that can turn a losing game into a winner. However, if you must bet on a negative expectation game, the next best strategy is the maximum boldness strategy. In other words, you want to bet on as few trials as possible. The more trials, the greater the likelihood that the positive expectation will be realized, and hence the greater the likelihood that betting on the negative expectation side will lose. Therefore, the negative expectation side has a lesser and lesser chance of losing as the length of the game is shortened - i.e., as the number of trials approaches 1. If you play a game whereby you have a 49% chance of winning $1 and a 51% chance of losing $1, you are best off betting on only 1 trial.
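The merit of maximum boldness can be quantified with the binomial distribution (a sketch, assuming independent even-money $1 bets with a 49% chance of winning; the function name is illustrative):

```python
from math import comb

def prob_net_ahead(n, p=0.49):
    """Probability of being net ahead after n even-money $1 bets.

    You are net ahead when you win strictly more than half the bets.
    """
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 3, 5, 25, 101):
    print(n, round(prob_net_ahead(n), 4))  # n=1 gives 0.49; it falls from there
```

The probability of finishing a winner is highest at a single trial and declines steadily as the number of trials grows.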

The more trials you bet on, the greater the likelihood you will lose, with the probability of losing approaching certainty as the length of the game approaches infinity. That isn't to say that you are in a positive expectation for the 1 trial, but you have at least minimized the probabilities of being a loser by only playing 1 trial. Return now to a positive expectation game. We determined at the outset of this discussion that on any given trade, the quantity that a trader puts on can be expressed as a factor, f, between 0 and 1, that represents the trader's quantity with respect to both the perceived loss on the next trade and the trader's total equity. If you know you have an edge over N bets but you do not know which of those N bets will be winners (and for how much), and which will be losers (and for how much), you are best off (in the long run) treating each bet exactly the same in terms of what percentage of your total stake is at risk.

This method of always trading a fixed fraction of your stake has been shown time and again to be the best staking system. If there is dependency in your trades, where winners beget winners and losers beget losers, or vice versa, you are still best off betting a fraction of your total stake on each bet, but that fraction is no longer fixed. In such a case, the fraction must reflect the effect of this dependency. "Wait," you say. "Aren't staking systems foolish to begin with? Haven't we seen that they don't overcome the house advantage, they only increase our total action?" This is absolutely true for a situation with a negative mathematical expectation. For a positive mathematical expectation, it is a different story altogether. In a positive expectancy situation the trader/gambler is faced with the question of how best to exploit the positive expectation.

**OPTIMAL FIXED FRACTIONAL TRADING**

We have spent the course of this discussion laying the groundwork for this section. We have seen that in order to consider betting or trading a given situation or system you must first determine if a positive mathematical expectation exists. We have seen that what is seemingly a "good bet" on a mathematical expectation basis may in fact not be such a good bet when you consider reinvestment of returns, if you are reinvesting too high a percentage of your winnings relative to the dispersion of outcomes of the system. Reinvesting returns never raises the mathematical expectation. If there is in fact a positive mathematical expectation, however small, the next step is to exploit this positive expectation to its fullest potential. For an independent trials process, this is achieved by reinvesting a fixed fraction of your total stake.

And how do we find this optimal f? Much work has been done in recent decades on this topic in the gambling community, the most famous and accurate of which is known as the Kelly Betting System. This is actually an application of a mathematical idea developed in 1956 by John L. Kelly, Jr. The Kelly criterion states that we should bet that fixed fraction of our stake (f) which maximizes the growth function G(f):

(1.08) G(f) = P*ln(1+B*f)+(1-P)*ln(1-f)

where,

f = The optimal fixed fraction.

P = The probability of a winning bet or trade.

B = The ratio of amount won on a winning bet to amount lost on a losing bet.

ln() = The natural logarithm function.

As it turns out, for an event with two possible outcomes, this optimal f can be found quite easily with the Kelly formulas.
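As a quick check, the growth function G(f) of Equation (1.08) can be maximized numerically for the two-to-one coin toss (P = .5, B = 2). This is a sketch using a simple grid search, not from the original text:

```python
from math import log

def growth(f, p=0.5, b=2.0):
    """Kelly growth function G(f) from Equation (1.08)."""
    return p * log(1 + b * f) + (1 - p) * log(1 - f)

# Grid-search f over (0, 1) in steps of .001; G(f) has a single peak.
best_f = max((i / 1000 for i in range(1, 1000)), key=growth)
print(best_f)  # 0.25
```

The maximum lands at f = .25, agreeing with the Kelly formula result derived below.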

**KELLY FORMULAS**

Beginning around the late 1940s, Bell System engineers were working on the problem of data transmission over long-distance lines. The problem facing them was that the lines were subject to seemingly random, unavoidable "noise" that would interfere with the transmission. Some rather ingenious solutions were proposed by engineers at Bell Labs. Oddly enough, there are great similarities between this data communications problem and the problem of geometric growth as pertains to gambling money management. One of the outgrowths of these solutions is the first Kelly formula. The first equation here is:

(1.09a) f = 2*P-1

or

(1.09b) f = P-Q

where,

f = The optimal fixed fraction.

P = The probability of a winning bet or trade.

Q = The probability of a loss, (or the complement of P, equal to 1-P).

Both forms of Equation (1.09) are equivalent. Equation (1.09a) or (1.09b) will yield the correct answer for optimal f provided the amount won on a winning bet always equals the amount lost on a losing bet. As an example, consider the following stream of bets:

-1, +1, +1, -1, -1, +1, +1, +1, +1, -1

There are 10 bets, 6 winners, hence:

f = (.6*2)-1 = 1.2-1 = .2

If the winners and losers were not all the same size, then this formula would not yield the correct answer. Such a case would be our two-to-one coin-toss example, where all of the winners were for 2 units and all of the losers for 1 unit. For this situation the Kelly formula is:

(1.10a) f = ((B+1)*P-1)/B

where,

f = The optimal fixed fraction.

P = The probability of a winning bet or trade.

B = The ratio of amount won on a winning bet to amount lost on a losing bet.

In our two-to-one coin-toss example:

f = ((2+1)*.5-1)/2

= (3*.5-1)/2

= (1.5-1)/2

= .5/2

= .25
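Both Kelly formulas reduce to one small function of P and B (a sketch; setting B = 1 reproduces Equations (1.09a/b), and the two-to-one coin toss uses B = 2):

```python
def kelly_f(p, b):
    """Optimal fixed fraction per Equation (1.10a): f = ((B+1)*P - 1)/B."""
    return ((b + 1) * p - 1) / b

# Even-money stream with 6 winners out of 10 bets (Equation 1.09): f = .2
print(round(kelly_f(0.6, 1.0), 10))  # 0.2
# Two-to-one coin toss: f = .25
print(kelly_f(0.5, 2.0))  # 0.25
```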

This formula will yield the correct answer for optimal f provided all wins are always for the same amount and all losses are always for the same amount. If this is not so, then this formula will not yield the correct answer.

The Kelly formulas are applicable only to outcomes that have a Bernoulli distribution. A Bernoulli distribution is a distribution with two possible, discrete outcomes. Gambling games very often have a Bernoulli distribution. The two outcomes are how much you make when you win, and how much you lose when you lose. Trading, unfortunately, is not this simple. To apply the Kelly formulas to a non-Bernoulli distribution of outcomes is a mistake. The result will not be the true optimal f. Consider the following sequence of bets/trades:

+9, +18, +7, +1, +10, -5, -3, -17, -7

Since this is not a Bernoulli distribution (the wins and losses are of different amounts), the Kelly formula is not applicable. However, let's try it anyway and see what we get. Since 5 of the 9 events are profitable, then P = .555. Now let's take averages of the wins and losses to calculate B (here is where so many traders go wrong). The average win is 9, and the average loss is 8. Therefore we say that B = 1.125. Plugging in the values we obtain:

f = ((1.125+1)*.555-1)/1.125

= (2.125*.555-1)/1.125

= (1.179375-1)/1.125

= .179375/1.125

= .159444444

So we say f = .16. You will see that this is not the optimal f. The optimal f for this sequence of trades is .24. Applying the Kelly formula when all wins are not for the same amount and/or all losses are not for the same amount is a mistake, for it will not yield the optimal f.

Notice that the numerator in this formula equals the mathematical expectation for an event with two possible outcomes as defined earlier. Therefore, we can say that as long as all wins are for the same amount and all losses are for the same amount (whether or not the amount that can be won equals the amount that can be lost), the optimal f is:

(1.10b) f = Mathematical Expectation/B

where,

f = The optimal fixed fraction.

B = The ratio of amount won on a winning bet to amount lost on a losing bet.

The mathematical expectation was defined earlier, but since Equation (1.10b) requires a Bernoulli distribution of outcomes, we must make certain in using it that we have only two possible outcomes. Equation (1.10a) is the most commonly seen of the forms of Equation (1.10). However, the formula can be reduced to the following simpler form:

(1.10c) f = P-Q/B

where

f = The optimal fixed fraction.

P = The probability of a winning bet or trade.

Q = The probability of a loss (or the complement of P, equal to 1-P).

**FINDING THE OPTIMAL F BY THE GEOMETRIC MEAN**

In trading we can count on our wins being for varying amounts and our losses being for varying amounts. Therefore the Kelly formulas cannot give us the correct optimal f. How then can we find our optimal f to know how many contracts to have on and have it be mathematically correct? Here is the solution. To begin with, we must amend our formula for finding HPRs to incorporate f:

(1.11) HPR = 1+f*(-Trade/Biggest Loss)

where,

f = The value we are using for f.

Trade = The profit or loss on a trade.

Biggest Loss = The P&L that resulted in the biggest loss.

And again, TWR is simply the geometric product of the HPRs and geometric mean (G) is simply the Nth root of the TWR.

(1.12) TWR = ∏[i = 1,N](1+f*(-Tradei/Biggest Loss))

(1.13) G = (∏[i = 1,N](1+f*(-Tradei/Biggest Loss)))^(1/N)

where,

f = The value we are using for f.

Tradei = The profit or loss on the ith trade.

Biggest Loss = The P&L that resulted in the biggest loss.

N = The total number of trades.

G = The geometric mean of the HPRs.

By looping through all values for f between .01 and 1, we can find that value for f which results in the highest TWR. This is the value for f that would provide us with the maximum return on our money using fixed fractional trading. We can also state that the optimal f is the f that yields the highest geometric mean. It matters not whether we look for the highest TWR or the highest geometric mean, as both are maximized at the same value for f.

Doing this with a computer is easy, since both the TWR curve and the geometric mean curve are smooth, with only one peak. You simply loop from f = .01 to f = 1.0 by .01. As soon as you get a TWR that is less than the previous TWR, you know that the f corresponding to the previous TWR is the optimal f. You can employ many other search algorithms to facilitate this process of finding the optimal f in the range of 0 to 1. One of the fastest is the parabolic interpolation search procedure detailed in *Portfolio Management Formulas*.
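The loop just described can be sketched directly in code. Applied here to the nine-trade sequence from the Kelly-formula example, it finds f = .24 where the misapplied Kelly formula gave .16 (a sketch; the function name is illustrative, and it assumes at least one losing trade in the sequence):

```python
def optimal_f(trades, step=0.01):
    """Brute-force the f that maximizes the TWR of Equation (1.12).

    HPR = 1 + f*(-trade/biggest_loss), with biggest_loss negative.
    Because the TWR curve has a single peak, the search stops at the
    first f whose TWR is no higher than the previous one.
    """
    biggest_loss = min(trades)  # the largest losing trade (negative)
    best_f, best_twr = 0.0, 0.0
    f = step
    while f <= 1.0:
        twr = 1.0
        for trade in trades:
            twr *= 1.0 + f * (-trade / biggest_loss)
        if twr <= best_twr:  # past the single peak -- stop
            break
        best_f, best_twr = f, twr
        f += step
    return best_f, best_twr

trades = [9, 18, 7, 1, 10, -5, -3, -17, -7]
f_opt, twr_opt = optimal_f(trades)
print(round(f_opt, 2), round(twr_opt, 4))  # f = 0.24 and its TWR
```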
