CHARACTERISTICS OF FIXED FRACTIONAL TRADING AND SALUTARY TECHNIQUES

COMPARING TRADING SYSTEMS

We have seen that two trading systems can be compared on the basis of their geometric means at their respective optimal fs. Further, we can compare systems based on how high their optimal fs themselves are, with the higher optimal f being the riskier system. This is because the minimum historical drawdown is an equity retracement of at least f percent. So there are two basic measures for comparing systems: the geometric means at the optimal fs, with the higher geometric mean being the superior system, and the optimal fs themselves, with the lower optimal f being the superior system. Thus, rather than having a single, one-dimensional measure of system performance, we see that performance must be measured on a two-dimensional plane, one axis being the geometric mean, the other being the value for f itself.

The higher the geometric mean at the optimal f, the better the system. Also, the lower the optimal f, the better the system. The geometric mean does not imply anything regarding drawdown. That is, a higher geometric mean does not mean a higher (or lower) drawdown. The geometric mean pertains only to return. The optimal f is the measure of the minimum expected historical drawdown as a percentage equity retracement. A higher optimal f does not mean a higher (or lower) return. We can also use these benchmarks to compare a given system at a fractional f value against another system at its full optimal f value. Therefore, when looking at systems, you should look at them in terms of how high their geometric means are and what their optimal fs are.

For example, suppose we have System A, which has a 1.05 geometric mean and an optimal f of .8. Also, we have System B, which has a geometric mean of 1.025 and an optimal f of .4. System A at the half f level will have the same minimum historical worst-case equity retracement (drawdown) of 40% as System B at its full f level, yet System A's geometric mean at half f will still be higher than System B's at the full f amount. Therefore, System A is superior to System B. "Wait a minute," you say, "I thought the only thing that mattered was that we had a geometric mean greater than 1, that the system need be only marginally profitable, that we can make all the money we want through money management!" That's still true. However, the rate at which you will make the money is still a function of the geometric mean at the f level you are employing.

The expected variability will be a function of how high the f you are using is. So, although it's true that you must have a system with a geometric mean at the optimal f that is greater than 1 and that you can still make virtually an unlimited amount with such a system after enough trades, the rate of growth is dependent upon the geometric mean at the f value employed. The variability en route to that goal is also a function of the f value employed. Yet these considerations, the degree of the geometric mean and the f employed, are secondary to the fact that you must have a positive mathematical expectation, although they are useful in comparing two systems or techniques that have positive mathematical expectations and an equal confidence of their working in the future.
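The dependence of the growth rate on the geometric mean can be made concrete: after N trades, the terminal wealth relative (TWR) is simply the geometric mean raised to the Nth power. A minimal sketch, using the two geometric means from the example above (the 100-trade horizon is an arbitrary assumption for illustration):

```python
# Terminal wealth relative (TWR) after N trades is geometric_mean ** N.
# Geometric means of the hypothetical Systems A and B; 100-trade horizon assumed.
gm_a, gm_b = 1.05, 1.025
n = 100

twr_a = gm_a ** n   # equity multiple for System A
twr_b = gm_b ** n   # equity multiple for System B
print(round(twr_a, 2), round(twr_b, 2))  # ≈ 131.5 vs ≈ 11.81
```

Both systems grow without bound, but the multiple on starting equity after the same number of trades differs by an order of magnitude.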

TOO MUCH SENSITIVITY TO THE BIGGEST LOSS

A recurring criticism of the entire optimal f approach is that it is too dependent on the biggest losing trade. This seems to be rather disturbing to many traders. They argue that the number of contracts you put on today should not be so much a function of a single bad trade in the past. Numerous algorithms have been worked up to alleviate this apparent oversensitivity to the largest loss. Many of these algorithms adjust the largest loss upward or downward to make the largest loss a function of the current volatility in the market. The relationship seems to be a quadratic one. That is, the absolute value of the largest loss seems to grow at a faster rate than the volatility. However, this is not a deterministic relationship.

That is, just because the volatility is X today does not mean that our largest loss will be X^Y. It simply means that it usually is somewhere near X^Y. If we could determine in advance what the largest possible loss would be going into today, we could have a much better handle on our money management. Here again is a case where we must consider the worst-case scenario and build from there. The problem is that we do not know exactly what our largest loss can be going into today. An algorithm that claims to predict this is really not very useful to us because of the one time that it fails. Consider, for instance, the possibility of an exogenous shock occurring in a market overnight. Suppose the volatility were quite low prior to this overnight shock, and the market then went locked-limit against you for the next few days.

Or suppose that there were no price limits, and the market simply opened an enormous amount against you the next day. These types of events are as old as commodity and stock trading itself. They can and do happen, and they are not always telegraphed in advance by increased volatility. Generally, then, you are better off not to "shrink" your largest historical loss to reflect a current low-volatility marketplace. Furthermore, there is the concrete possibility of experiencing a loss larger in the future than the historically largest loss. There is no mandate that the largest loss seen in the past is the largest loss you can experience today. This is true regardless of the current volatility coming into today.

The problem is that, empirically, the f that has been optimal in the past is a function of the largest loss of the past. There's no getting around this. However, as you shall see when we get into the parametric techniques, you can budget for a greater loss in the future. In so doing, you will be prepared if the almost inevitable larger loss comes along. Rather than trying to adjust the largest loss to the current climate of a given market so that your empirical optimal f reflects the current climate, you will be much better off learning the parametric techniques. The technique that follows is a possible solution to this problem, and it can be applied whether we are deriving our optimal f empirically or, as we shall learn later, parametrically.

EQUALIZING OPTIMAL F

Optimal f will yield the greatest geometric growth on a stream of outcomes. This is a mathematical fact. Consider the hypothetical stream of outcomes:

+2, -3, +10, -5

This is a stream from which we can determine our optimal f as .17, or to bet 1 unit for every \$29.41 in equity. Doing so on such a stream will yield the greatest growth on our equity.
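The optimal f quoted here can be recovered by a simple brute-force search over f, maximizing the terminal wealth relative (the product of the holding period returns). A sketch, where the 0.01 grid step is an assumption for illustration:

```python
# Brute-force search for the optimal f of the hypothetical stream.
trades = [2, -3, 10, -5]
biggest_loss = min(trades)  # -5

def twr(f):
    """Terminal wealth relative: the product of the HPRs at fraction f."""
    prod = 1.0
    for t in trades:
        prod *= 1 + f * (t / abs(biggest_loss))
    return prod

best_f = max((i / 100 for i in range(1, 100)), key=twr)
f_dollars = abs(biggest_loss) / best_f   # equity needed per 1 unit
print(best_f, round(f_dollars, 2))       # 0.17 29.41
```

The search confirms f = .17 and $29.41 of equity per unit, matching the figures in the text.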

Consider for a moment that this stream represents the trade profits and losses on one share of stock. Optimally we should buy one share of stock for every \$29.41 that we have in account equity, regardless of what the current stock price is. But suppose the current stock price is \$100 per share. Further, suppose the stock was \$20 per share when the first two trades occurred and was \$50 per share when the last two trades occurred. Recall that with optimal f we are using the stream of past trade P&L's as a proxy for the distribution of expected trade P&L's currently.

Therefore, we can preprocess the trade P&L data to reflect this by converting the past trade P&L data to reflect a commensurate percentage gain or loss based upon the current price. For our first two trades, which occurred at a stock price of \$20 per share, the \$2 gain corresponds to a 10% gain and the \$3 loss corresponds to a 15% loss. For the last two trades, taken at a stock price of \$50 per share, the \$10 gain corresponds to a 20% gain and the \$5 loss corresponds to a 10% loss.

The formulas to convert raw trade P&L's to percentage gains and losses for longs and shorts are as follows:

P&L% = Exit Price/Entry Price-1 (for longs)
P&L% = Entry Price/Exit Price-1 (for shorts)

or we can use the following formula to convert both longs and shorts:

P&L% = P&L in Points/Entry Price

Thus, for our 4 hypothetical trades, we now have the following stream of percentage gains and losses:

+.1, -.15, +.2, -.1

We call this new stream of translated P&L's the equalized data, because it is equalized to the price of the underlying instrument when the trade occurred.
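The conversion above can be sketched in a few lines using the points-based formula, with the entry prices assumed in the example ($20 per share for the first two trades, $50 for the last two):

```python
# Equalize raw P&L in points into percentage gains/losses:
# P&L% = P&L in Points / Entry Price
raw_pnl = [2, -3, 10, -5]
entry_prices = [20, 20, 50, 50]

equalized = [pnl / entry for pnl, entry in zip(raw_pnl, entry_prices)]
print(equalized)  # [0.1, -0.15, 0.2, -0.1]
```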

To account for commissions and slippage, you must adjust the exit price downward in the formula for longs by an amount commensurate with the commissions and slippage. Likewise, you should adjust the exit price upward in the formula for shorts. If you are using the points-based formula, you must deduct the amount of the commissions and slippage from the numerator, P&L in Points.

Next we determine our optimal f on these percentage gains and losses. The f that is optimal is .09. We must now convert this optimal f of .09 into a dollar amount based upon the current stock price. This is accomplished by the following formula:

f\$ = Biggest % Loss*Current Price*\$ per Point/-f

Thus, since our biggest percentage loss was -.15, the current price is \$100 per share, and the number of dollars per full point is 1, we can determine our f\$ as:

f\$ = -.15*100*1/-.09 = -15/-.09 = 166.67

Thus, we would optimally buy 1 share for every \$166.67 in account equity. If we used 100 shares as our unit size, the only variable affected would have been the number of dollars per full point, which would have been 100. The resulting f\$ would have been \$16,666.67 in equity for every 100 shares.

Suppose now that the stock went down to \$3 per share. Our f\$ equation would be exactly the same except for the current price variable which would now be 3. Thus, the amount to finance 1 share by becomes:

f\$ = -.15*3*1/-.09 = -.45/-.09 = 5

We optimally would buy 1 share for every \$5 we had in account equity. Notice that the optimal f does not change with the current price of the stock. It remains at .09. However, the f\$ changes continuously as the price of the stock changes. This doesn't mean that you must alter a position you are already in on a daily basis, but it does make it more likely to be beneficial that you do so. As an example, if you are long a given stock and it declines, the dollars that you should allocate to 1 unit of this stock will decline as well, with the optimal f determined off of equalized data. If your optimal f is determined off of the raw trade P&L data, it will not decline. In both cases, your daily equity is declining. Using the equalized optimal f makes it more likely that adjusting your position size daily will be beneficial.
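The recalculation of f$ as the underlying price changes can be sketched directly, using the example's equalized optimal f of .09 and biggest percentage loss of -.15:

```python
# f$ from equalized data: dollars of equity needed per unit at a given price.
f = 0.09
biggest_pct_loss = -0.15
dollars_per_point = 1  # 1 for a single share

def f_dollars(current_price):
    return biggest_pct_loss * current_price * dollars_per_point / -f

print(round(f_dollars(100), 2))  # 166.67
print(round(f_dollars(3), 2))    # 5.0
```

The optimal f stays at .09 throughout; only f$ moves with the price of the stock.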

The geometric average trade changes as well. Recall the equation for the geometric average trade:

GAT = G*(Biggest Loss/-f)

where,

G = Geometric mean - 1.
f = Optimal fixed fraction.

This equation is the equivalent of:

GAT = (geometric mean-1)*f\$

We have already obtained a new geometric mean by equalizing the past data. The f$ variable, which is constant when we do not equalize the past data, now changes continuously, as it is a function of the current underlying price. Hence our geometric average trade changes continuously as the price of the underlying instrument changes.
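This can be sketched numerically. The stream and f = .09 come from the example; the geometric mean is taken here as the fourth root of the TWR of the four equalized trades:

```python
# Geometric average trade (GAT) off equalized data, at two underlying prices.
pct_trades = [0.10, -0.15, 0.20, -0.10]
f = 0.09
biggest_pct_loss = min(pct_trades)  # -0.15

def f_dollars(price, dollars_per_point=1):
    return abs(biggest_pct_loss) * price * dollars_per_point / f

twr = 1.0
for t in pct_trades:
    twr *= 1 + f * (t / abs(biggest_pct_loss))
gm = twr ** (1 / len(pct_trades))   # geometric mean per trade

for price in (100, 3):
    gat = (gm - 1) * f_dollars(price)
    print(price, round(gat, 4))
```

The geometric mean is fixed, but the GAT in dollars shrinks and grows with the underlying price through f$.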

Our threshold to the geometric must also be changed to reflect the equalized data. Recall the equation for the threshold to the geometric:

T = AAT/GAT*Biggest Loss/-f

where,

T = The threshold to the geometric.
AAT = The arithmetic average trade.
GAT = The geometric average trade.
f = The optimal f (0 to 1).

This equation can also be rewritten as: T = AAT/GAT*f\$

Now, not only do the AAT and GAT variables change continuously as the price of the underlying changes, so too does the f\$ variable.

Finally, when putting together a portfolio of market systems we must figure daily HPRs. These too are a function of f\$:

Daily HPR = D\$/f\$+1

where,

D\$ = The dollar gain or loss on 1 unit from the previous day. This is equal to (Tonight's Close-Last Night's Close)*Dollars per Point.
f\$ = The current optimal f in dollars. Here, however, the current price variable is last night's close.

For example, suppose a stock tonight closed at \$99 per share. Last night it was \$102 per share. Our biggest percentage loss is -.15. If our f is .09, then our f\$ is:

f\$ = -.15*102*1/-.09
= -15.3/-.09
= 170

Since we are dealing with only 1 share, our dollars per point value is \$1. We can now determine our daily HPR for today as:

Daily HPR = (99-102)*1/170+1
= -3/170+1
= -.01764705882+1
= .9823529412
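The daily HPR arithmetic above can be sketched as follows:

```python
# Daily HPR for a 1-share position; f$ uses last night's close.
f = 0.09
biggest_pct_loss = -0.15
dollars_per_point = 1

last_close, tonight_close = 102, 99
f_dollars = biggest_pct_loss * last_close * dollars_per_point / -f  # 170.0
daily_gain = (tonight_close - last_close) * dollars_per_point       # -3
daily_hpr = daily_gain / f_dollars + 1
print(round(daily_hpr, 10))  # 0.9823529412
```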

Return now to what was said at the outset of this discussion. Given a stream of trade P&L's, the optimal f will make the greatest geometric growth on that stream. We use the stream of trade P&L's as a proxy for the distribution of possible outcomes on the next trade. Along this line of reasoning, it may be advantageous for us to equalize the stream of past trade profits and losses to be what they would be if they were performed at the current market price. In so doing, we may obtain a more realistic proxy of the distribution of potential trade profits and losses on the next trade. Therefore, we should figure our optimal f from this adjusted distribution of trade profits and losses.

This does not mean that we would have made more by using the optimal f off of the equalized data. We would not have.

However, if all of the trades were figured off of the current price, the equalized optimal f would have made more than the raw optimal f. Which, then, is the better to use? Should we equalize our data and determine our optimal f, or should we just run everything as it is? This is more a matter of your beliefs than it is mathematical fact. It is a matter of what is more pertinent in the item you are trading: percentage changes or absolute changes. Is a \$2 move in a \$20 stock the same as a \$10 move in a \$100 stock? What if we are discussing dollars and deutsche marks? Is a .30-point move at .4500 the same as a .40-point move at .6000? My personal opinion is that you are probably better off with the equalized data. Often the matter is moot, in that if a stock has moved from \$20 per share to \$100 per share and we want to determine the optimal f, we want to use current data.

The trades that occurred at \$20 per share may not be representative of the way the stock is presently trading regardless of whether they are equalized or not. Generally, then, you are better off not using data where the underlying was at a dramatically different price than it presently is, as the characteristics of the way the item trades may have changed as well. In that sense, the optimal f off of the raw data and the optimal f off of the equalized data will be identical if all trades occurred at the same underlying price. So we can state that if it does matter a great deal whether you equalize your data or not, then you're probably using too much data anyway. You've gone so far into the past that the trades generated back then probably are not very representative of the next trade. In short, we can say that it doesn't much matter whether you use equalized data or not, and if it does, there's probably a problem.

If there isn't a problem, and there is a difference between using the equalized data and the raw data, you should opt for the equalized data. This does not mean that the optimal f figured off of the equalized data would have been optimal in the past. It would not have been. The optimal f figured off of the raw data would have been optimal in the past. However, in terms of determining the as-yet-unknown answer to the question of what will be the optimal f, the optimal f figured off of the equalized data makes better sense, as the equalized data is a fairer representation of the distribution of possible outcomes on the next trade. The conversion formulas will give different answers depending upon whether the trade was initiated as a long or a short. For example, if a stock is bought at 80 and sold at 100, the percentage gain is 25.

However, if a stock is sold short at 100 and covered at 80, the gain is only 20%. In both cases, the stock was bought at 80 and sold at 100, but the sequence, the chronology of these transactions, must be accounted for. As the chronology of transactions affects the distribution of percentage gains and losses, we assume that the chronology of transactions in the future will be more like the chronology in the past than not. Thus, the formulas will give different answers for longs and shorts. Of course, we could ignore the chronology of the trades, but to do so would be to reduce the information content of the trade's history. Further, the risk involved with a trade is a function of the chronology of the trade, a fact we would be forced to ignore.

DOLLAR AVERAGING AND SHARE AVERAGING IDEAS

Consider a hypothetical motorist, Joe Putzivakian, case number 286952343. Every week, he puts \$20 of gasoline into his auto, regardless of the price of gasoline that week. He always gets \$20 worth, and every week he uses the \$20 worth, no matter how much or how little that buys him. When the price of gasoline is higher, it forces him to be more austere in his driving. As a result, Joe Putzivakian will have gone through life buying more gasoline when it is cheaper, and buying less when it is more expensive. He will therefore have gone through life paying a below-average cost per gallon of gasoline. In other words, if you averaged the cost of a gallon of gasoline over all of the weeks during which Joe was a motorist, that average would be higher than the average price Joe paid. Now consider his hypothetical cousin, Cecil Putzivakian, case number 286952344.

Whenever he needs gasoline, he just fills up his pickup and complains about the high price of gasoline. As a result, Cecil has used a consistent amount of gas each week, and has therefore paid the average price for it throughout his motoring lifetime. Now let's suppose you are looking at a long-term investment program. You decide that you want to put money into a mutual fund to be used for your retirement many years down the road. You believe that when you retire the mutual fund will be at a much higher value than it is today. That is, you believe that in an asymptotic sense the mutual fund will be an investment that makes money. However, you do not know if it is going to go up or down over the next month, or the next year. You are absent knowledge about the nearer-term performance of the mutual fund. To cope with this, you can dollar average into the mutual fund.

Say you want to space your entry into the mutual fund over the course of two years. Further, say you have \$36,000 to invest. Therefore, every month for the next 24 months you will invest \$1,500 of this \$36,000 into the fund, until after 24 months you will be completely invested. By so doing, you have obtained a below average cost into the fund. "Average" as it is used here refers to the average price of the fund over the 24-month period during which you are investing. It doesn't necessarily mean that you will get a price that is cheaper than if you put the full \$36,000 into it today, nor does it guarantee that at the end of these 24 months of entering the fund you will show a profit on your \$36,000. The amount you have in the fund at that time may be less than the \$36,000.
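The below-average cost effect comes from the fact that a fixed dollar outlay buys shares at the harmonic mean of the prices paid, which can never exceed their arithmetic mean. A sketch, where the 24-month price path is hypothetical and chosen only for illustration:

```python
# Dollar averaging: a fixed $1,500 per month into a fund with varying prices.
# The price path below is hypothetical, for illustration only.
prices = [10, 12, 8, 9, 11, 14, 13, 10, 7, 8, 12, 15,
          16, 14, 12, 10, 9, 11, 13, 15, 14, 12, 11, 13]
monthly = 1500.0

shares = sum(monthly / p for p in prices)        # more shares when price is low
avg_cost = (monthly * len(prices)) / shares      # what you actually paid/share
avg_price = sum(prices) / len(prices)            # average market price

print(round(avg_cost, 2), round(avg_price, 2))
```

For any non-constant price path, `avg_cost` comes out below `avg_price`; with a constant price the two are equal.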

What it does mean is that if you had simply entered arbitrarily at some point along the next 24 months with your full \$36,000 in one shot, you would probably have ended up buying fewer mutual fund shares, and hence have paid a higher price than if you dollar averaged in. The same is true when you go to exit a mutual fund, only the exit side works with share averaging rather than dollar averaging. Say it is now time for you to retire and you have a total of 1,000 shares in this mutual fund. You don't know if this is a good time for you to be getting out or not, so you decide to take 2 years (24 months) to average out of the fund. Here's how you do it. You take the total number of shares you have (1,000) and divide it by the number of periods you want to get out over (24 months). Since 1,000/24 = 41.67, you will sell 41.67 shares every month for the next 24 months.

In so doing, you will have ended up selling your shares at a higher price than the average price over the next 24 months. Of course, this is no guarantee that you will have sold them for a higher price than you could have received for them today, nor does it guarantee that you will have sold your shares at a higher price than what you might get if you were to sell all of your shares 24 months from now. What you will get is a higher price than the average over the time period that you are averaging out over. That is guaranteed. These same principles can be applied to a trading account. By dollar averaging money into a trading account as opposed to simply "taking the plunge" at some point during the time period you are averaging over, you will have gotten into the account at a better "average price."

Absent knowledge of what the near-term equity changes in the account will be, you are better off, on average, to dollar average into a trading program. Don't just rely on your gut and your nose; use the measures of dependency on the monthly equity changes of a trading program. Try to see if there is dependency in the monthly equity changes. If there is dependency to a high enough confidence level that you can plunge in at a favorable point, then do so. However, if there isn't a high enough confidence in the dependency of the monthly equity changes, then dollar average into a trading program. In so doing, you will be ahead in an asymptotic sense. The same is true for withdrawing money from an account. The way to share average out of a trading program is to decide upon a date to start averaging out, as well as how long a period of time to average out for.

On the date when you are going to start averaging out, divide the equity in the account by 100. This gives you the value of "1 share." Now, divide 100 by the number of periods that you want to average out over. Say you want to average out of the account weekly over the next 20 weeks. That makes 20 periods. Dividing 100 by 20 gives 5. Therefore, you are going to average out of your account by 5 "shares" per week. Multiply the value you had figured for 1 share by 5, and that will tell you how much money to withdraw from your trading account this week. Now, going into next week, you must keep track of how many shares you have left. Since you got out of 5 shares last week, you are left with 95. When the time comes along for withdrawal number 2, divide the equity in your account by 95 and multiply by 5. This will give you the value of the 5 shares you are "cashing in" this week.

You will keep on doing this until you have zero shares left, at which point no equity will be left in your account. By doing this, you have probably obtained a better average price for getting out of your account than you would have received had you gotten out at some arbitrary point along this 20-week withdrawal period. This principle of averaging in and out of a trading account is so simple, you have to wonder why no one ever does it. I always ask the accounts that I manage to do this, yet I have never had anyone, to date, take me up on it. The reason is simple. The concept, although completely valid, requires discipline and time in order to work, exactly the same ingredients as those required to make the concept of optimal f work. It's one thing to understand the concepts and believe in them. It's another thing to do it.
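The withdrawal procedure can be sketched as a simple loop. The starting equity here is hypothetical, and the weekly trading P&L is set to zero so the bookkeeping is easy to check:

```python
# Share-averaging out of an account over 20 weekly withdrawals.
equity = 50000.0     # hypothetical starting equity
shares_left = 100.0  # the account is divided into 100 "shares" at the start
per_period = 5.0     # 100 shares / 20 weeks = 5 shares cashed in per week

withdrawn = 0.0
for week in range(20):
    withdrawal = equity / shares_left * per_period  # value of 5 "shares"
    equity -= withdrawal
    withdrawn += withdrawal
    shares_left -= per_period
    # in practice, equity would also change here with trading gains/losses

print(round(equity, 2), round(withdrawn, 2))  # 0.0 50000.0
```

With trading P&L included, each week's withdrawal would reflect the then-current equity, which is the point of the technique.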

THE ARC SINE LAWS AND RANDOM WALKS

Now we turn the discussion toward drawdowns. First, however, we need to study a little bit of theory in the way of the first and second arc sine laws. These are principles that pertain to random walks. The stream of trade P&L's that you are dealing with may not be truly random. The degree to which the stream of P&L's you are using differs from being purely random is the degree to which this discussion will not pertain to your stream of profits and losses. Generally though, most streams of trade profits and losses are nearly random as determined by the runs test and the linear correlation coefficient. Furthermore, not only do the arc sine laws assume that you know in advance what the amount that you can win or lose is, they also assume that the amount you can win is equal to the amount you can lose, and that this is always a constant amount.

In our discussion, we will assume that the amount that you can win or lose is \$1 on each play. The arc sine laws also assume that you have a 50% chance of winning and a 50% chance of losing. Thus, the arc sine laws assume a game where the mathematical expectation is 0. These caveats make for a game that is considerably different, and considerably more simple, than trading is. However, the first and second arc sine laws are exact for the game just described. To the degree that trading differs from the game just described, the arc sine laws do not apply. For the sake of learning the theory, however, we will not let these differences concern us for the moment.

Imagine a truly random sequence, such as coin tossing, where we win 1 unit when we win and we lose 1 unit when we lose. If we were to plot our equity curve over X tosses, we could refer to a specific point (X,Y), where X represented the Xth toss and Y our cumulative gain or loss as of that toss. We define positive territory as any time the equity curve is above the X axis, or on the X axis when the previous point was above the X axis. Likewise, we define negative territory as any time the equity curve is below the X axis, or on the X axis when the previous point was below the X axis. We would expect the total number of points in positive territory to be close to the total number of points in negative territory. But this is not the case.

If you were to toss the coin N times, your probability (Prob) of spending K of the events in positive territory is:

Prob ~ 1/(Pi*K^.5*(N-K)^.5)

where,

Pi = 3.141592654.

The symbol ~ means that both sides tend to equality in the limit. In this case, as either K or (N-K) approaches infinity, the two sides of the equation will tend toward equality.

Thus, if we were to toss a coin 10 times (N = 10) we would have the following probabilities of being in positive territory for K of the tosses:

K          Probability
0             .14795
1             .1061
2             .0796
3             .0695
4             .065
5             .0637
6             .065
7             .0695
8             .0796
9             .1061
10           .14795
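The interior of this table can be reproduced from the approximation formula (the endpoints K = 0 and K = N require the exact discrete calculation, since the approximation is undefined there):

```python
import math

# Approximate probability of spending K of N tosses in positive territory:
# Prob ~ 1 / (Pi * sqrt(K * (N - K)))
def prob(k, n):
    return 1 / (math.pi * math.sqrt(k * (n - k)))

for k in range(1, 10):
    print(k, round(prob(k, 10), 4))
```

Note the U shape: the probability is lowest at K = 5 and rises toward the ends.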

You would expect to be in positive territory for 5 of the 10 tosses, yet that is the least likely outcome! In fact, the most likely outcomes are that you will be in positive territory for all of the tosses or for none of them!

This principle is formally detailed in the first arc sine law which states:

For a fixed A (0&lt;A&lt;1), as N approaches infinity, the probability that the fraction of time K/N spent on the positive side is less than A tends to:

Prob{(K/N)<A} = 2/Pi*ARCSIN(A^.5)

where,

Pi = 3.141592654.

Even with N as small as 20, you obtain a very close approximation for the probability. The first arc sine law tells us that with probability .1, we can expect to see 99.4% of the time spent on one side of the origin, and with probability .2, the equity curve will spend 97.6% of the time on the same side of the origin! With a probability of .5, we can expect the equity curve to spend in excess of 85.35% of the time on the same side of the origin. That is just how perverse the equity curve of a fair coin is!
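These figures can be checked directly from the law. By symmetry, the probability that the curve spends at least a fraction A of its time on one side (either side) of the origin works out to (4/Pi)*arcsin((1-A)^.5). A sketch:

```python
import math

# Probability that the equity curve spends at least fraction a of its time
# on one (either) side of the origin, from the first arc sine law.
def prob_one_side(a):
    return 4 / math.pi * math.asin(math.sqrt(1 - a))

for a in (0.994, 0.976, 0.8535):
    print(a, round(prob_one_side(a), 3))  # ≈ 0.099, 0.198, 0.5
```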

Now here is the second arc sine law, which uses the same equation and hence has the same probabilities as the first arc sine law, but applies to an altogether different matter: the maximum or minimum of the equity curve. The second arc sine law states that the maximum (or minimum) point of an equity curve will most likely occur at the endpoints, and least likely at the center. The distribution is exactly the same as that of the amount of time spent on one side of the origin!

If you were to toss the coin N times, your probability of achieving the maximum at point K in the equity curve is also given by the same equation:

Prob ~ 1/(Pi*K^.5*(N-K)^.5), where Pi = 3.141592654.

Thus, if you were to toss a coin 10 times (N = 10) you would have the following probabilities of the maximum occurring on the Kth toss:

K     Probability
0         .14795
1         .1061
2         .0796
3         .0695
4         .065
5         .0637
6         .065
7         .0695
8         .0796
9         .1061
10       .14795

In a nutshell, the second arc sine law states that the maximum or minimum are most likely to occur near the endpoints of the equity curve and least likely to occur in the center.

TIME SPENT IN A DRAWDOWN

Recall the caveats involved with the arc sine laws. That is, the arc sine laws assume a 50% chance of winning, and a 50% chance of losing. Further, they assume that you win or lose the exact same amounts and that the generating stream is purely random. Trading is considerably more complicated than this. Thus, the arc sine laws don't apply in a pure sense, but they do apply in spirit. Consider that the arc sine laws worked on an arithmetic mathematical expectation of 0. Thus, with the first law, we can interpret the percentage of time on either side of the zero line as the percentage of time on either side of the arithmetic mathematical expectation.

Likewise with the second law, where, rather than looking for an absolute maximum and minimum, we were looking for a maximum above the mathematical expectation and a minimum below it. The minimum below the mathematical expectation could be greater than the maximum above it if the minimum happened later and the arithmetic mathematical expectation was a rising line rather than a horizontal line at zero. Thus, we can interpret the spirit of the arc sine laws as applying to trading in the following ways. (However, rather than imagining the important line as being a horizontal line at zero, we should imagine a line that slopes upward at the rate of the arithmetic average trade. If we are fixed fractional trading, the line will be one that curves upward, getting ever steeper, at such a rate that the next point equals the current point times the geometric mean.)

We can interpret the first arc sine law as stating that we should expect to be on one side of the mathematical expectation line for far more trades than we spend on the other side of the mathematical expectation line. Regarding the second arc sine law, we should expect the maximum deviations from the mathematical expectation line, either above or below it, as being most likely to occur near the beginning or the end of the equity curve graph and least likely near the center of it. You will notice another characteristic that happens when you are trading at the optimal f levels. This characteristic concerns the length of time you spend between two equity high points.

If you are trading at the optimal f level, whether you are trading just 1 market system or a portfolio of market systems, the time the longest drawdown takes to elapse is usually 35 to 55% of the total time you are looking at. This principle appears to hold true no matter how long or short a period we are looking at. This means that we can expect to be in the largest drawdown for approximately 35 to 55% of the trades over the life of a trading program we are employing! This is true whether we are trading 1 market system or an entire portfolio.

Therefore, we must learn to expect to be within the maximum drawdown for 35 to 55% of the life of a program that we wish to trade. Knowing this before the fact allows us to be mentally prepared to trade through it. Whether you are about to manage an account, about to have one managed by someone else, or about to trade your own account, you should bear in mind the spirit of the arc sine laws and how they work on your equity curve relative to the mathematical expectation line, along with the 35% to 55% rule. By so doing you will be tuned to reality regarding what to expect as the future unfolds.