An Intro to Quant Models: Part 2

Last time I gave you a brief introduction to what a quant does here at LFX – if you missed that opening article, I recommend reading it first here.

As part of that piece I mentioned the different models we use: the strategy model (which we refer to as the engine), the risk model and the cost model.

So let’s break these down into a bit more detail.

The engine is, as the name suggests, the whole engine room for the system. It takes the various inputs we use along with the rule sets and applies the rules to the inputs in order to generate trading signals.

The engine is where all the components of every possible system live. For a strategy to get into the engine it needs to have passed a rigorous set of tests, as well as live testing on our test engine.

The key to these, though, is our various scoring methods. These are typically what many quant groups would call alpha models, although we call ours ‘Greedy’, after the greedy algorithms it uses. For those familiar with the term ‘Greedy’, yes, we also have AI algos, often referred to as ‘Smart’.
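To give a flavour of what a greedy algorithm looks like in this context, here is a minimal sketch: repeatedly pick the candidate strategy with the best score per unit of risk until a risk budget is used up. The strategy names, numbers and score-per-risk criterion are purely my own illustrative assumptions, not LFX’s actual scoring method.

```python
# Illustrative greedy selection: take the locally best choice (highest
# score-per-unit-of-risk) at each step until the risk budget is exhausted.
# All names and numbers are hypothetical.

def greedy_select(candidates, risk_budget):
    """candidates: list of (name, score, risk) tuples; returns chosen names."""
    chosen = []
    # Greedy step: rank once by score/risk, then take items while budget allows.
    for name, score, risk in sorted(candidates, key=lambda c: c[1] / c[2], reverse=True):
        if risk <= risk_budget:
            chosen.append(name)
            risk_budget -= risk
    return chosen

candidates = [("trend_eurusd", 1.8, 0.4), ("carry_gbpjpy", 1.2, 0.5), ("meanrev_usdcad", 0.9, 0.2)]
print(greedy_select(candidates, risk_budget=0.6))  # -> ['trend_eurusd', 'meanrev_usdcad']
```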

While I’m not going to give away our full scoring methods, as they are part of the secret sauce, this is where you can start to see the complexity involved. It is not enough just to build well-thought-out strategies with sound reasoning.

In other words, it’s no good building a system that seems to do well if we have no idea why, fundamentally, it would do well (some quants do use such models, an approach referred to as pure data mining, but it’s not something we do). You also need sound scoring methods that ensure consistency within systems.

I’ll give you an example using NinjaTrader’s optimiser. On NinjaTrader you can optimise a strategy on various scoring methods, for instance Total Net Profit, Sharpe Ratio or % Win Rate. However, all of these come with limitations.

If I scored a system on % win rate, while this would rank anything that wins often, it might not actually be that profitable. If a strategy loses 100 times what it wins and only wins 90% of the time, then it is useless; whereas if it wins only 10% of the time but wins 100 times what it loses, then it is ideal. If we consider pure net profit, we might find that a strategy makes an awful lot of money in a financial crisis but otherwise loses money. Again, this isn’t that useful.
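To make the win-rate point concrete, here is the standard per-trade expectancy calculation applied to the two hypothetical cases above (the numbers are made up, and the units are arbitrary):

```python
# Expectancy per trade = win_rate * avg_win - (1 - win_rate) * avg_loss.
# Reproduces the two hypothetical cases from the text above.

def expectancy(win_rate, avg_win, avg_loss):
    return win_rate * avg_win - (1 - win_rate) * avg_loss

print(expectancy(0.90, 1, 100))   # -9.1 per trade: high win rate, but useless
print(expectancy(0.10, 100, 1))   #  9.1 per trade: low win rate, but ideal
```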

The Sharpe ratio also has a lot of limitations, but that is possibly one for a separate Google search (or a later article), as there are honestly still lots of debates about it despite its continued use in the industry.
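For reference, the standard annualised Sharpe calculation from daily returns looks like this; the return series below is hypothetical and 252 trading days per year is assumed:

```python
# Annualised Sharpe ratio: mean excess return / volatility, scaled by sqrt(252).
import statistics
import math

def sharpe_ratio(daily_returns, risk_free_daily=0.0, periods_per_year=252):
    excess = [r - risk_free_daily for r in daily_returns]
    return (statistics.mean(excess) / statistics.stdev(excess)) * math.sqrt(periods_per_year)

daily_returns = [0.001, -0.002, 0.0015, 0.0005, -0.001, 0.002]  # hypothetical
print(round(sharpe_ratio(daily_returns), 2))
```

One common criticism is that this formula penalises upside volatility just as much as downside volatility, which is part of why the debate continues.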

So now’s a good time to take a step back.

Let’s say we have a great idea for a trading strategy. The first thing we will do is score it across our scoring systems over at least 10 years of data. The idea behind our models is that we are looking for consistency across the period.
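One simple way to express that consistency requirement is to score the strategy year by year and only accept it if every year clears a minimum threshold. This is an illustrative sketch only; the actual scoring functions and thresholds are LFX’s own and the yearly numbers below are invented.

```python
# Consistency check: every year in the test window must clear a minimum score.

def consistent_across_years(yearly_scores, min_score):
    """yearly_scores: dict of year -> score (e.g. an annual Sharpe or return figure)."""
    return all(score >= min_score for score in yearly_scores.values())

yearly_scores = {2005: 0.8, 2006: 1.1, 2007: 0.9, 2008: 0.7, 2009: 1.3,
                 2010: 0.6, 2011: 1.0, 2012: 0.9, 2013: 1.2, 2014: 0.8}
print(consistent_across_years(yearly_scores, min_score=0.5))  # True
```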

Whilst we do this we also apply what we call our ‘common sense test’. That is: can you adequately explain to a standard investor why the strategy works, and does it make sense?

Once a strategy has passed those two tests, we then take it into our cost models. I’d say that less than 5% of strategies make it this far.

Our cost models then stress test the systems for slippage, spread, roll and any other costs we have. There are two real tests here: normal operating conditions and stressed conditions. Essentially we test whether the system can work both in the normal environment and under significantly stressed conditions (something like the financial crisis).
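A very stripped-down version of this idea is to subtract per-trade cost assumptions from a gross P&L series, then repeat with those costs widened to reflect a stressed market. The cost figures and the stress multiplier below are hypothetical and deliberately crude compared with a real cost model.

```python
# Apply per-trade cost assumptions (spread, slippage, roll) to a gross P&L
# series under normal and stressed conditions. All numbers are hypothetical,
# in price units per trade.

def net_pnl(gross_pnl_per_trade, spread, slippage, roll, stress_multiplier=1.0):
    cost_per_trade = (spread + slippage + roll) * stress_multiplier
    return [pnl - cost_per_trade for pnl in gross_pnl_per_trade]

gross = [12.0, -5.0, 8.0, 3.0, -2.0, 15.0]  # gross P&L per trade

normal   = net_pnl(gross, spread=1.0, slippage=0.5, roll=0.2)
stressed = net_pnl(gross, spread=1.0, slippage=0.5, roll=0.2, stress_multiplier=3.0)

# A candidate system would need to remain profitable in both scenarios.
print(sum(normal), sum(stressed))
```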

A lot of people forget this step when evaluating systems, largely because most retail trading platforms can’t really handle it: costs are not only broker-dependent but also genuinely challenging to model.

To pass, a system will need to get through both the normal and stressed cost models for fully scaled solutions. This normally means testing at various levels of AUM. Part of this also helps inform how much we can run on each system, to ensure we don’t hit capacity or overcrowd our own systems.

So at this stage, you’re talking about less than 1% of systems that get this far.

The final model is the risk model.

Now, we actually run both the cost and risk engines in a live environment, helping to manage total costs and total risks across the systems we run.

The risk model, though, is designed to look at the system’s total exposure, drawdowns, per-trade risk, and how it will fare in both normal and stressed conditions. The key for a system to pass this test is, again, consistency. This can often be seen through strong risk-reward principles in the strategy. Finally, we have one additional requirement: all our systems must run positive equity.
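Two of the checks described above are easy to sketch: the maximum drawdown on an equity curve, and whether the equity curve ever goes negative. The equity numbers are hypothetical and this is only an illustration of the kind of check involved, not the risk model itself.

```python
# Maximum drawdown and a positive-equity check on a hypothetical equity curve.

def max_drawdown(equity_curve):
    """Largest peak-to-trough decline, as a fraction of the peak."""
    peak, worst = equity_curve[0], 0.0
    for value in equity_curve:
        peak = max(peak, value)
        worst = max(worst, (peak - value) / peak)
    return worst

def always_positive(equity_curve):
    return min(equity_curve) > 0

equity = [100, 104, 101, 108, 103, 112, 110, 118]  # hypothetical account equity
print(f"max drawdown: {max_drawdown(equity):.1%}")   # ~4.6%
print("positive equity:", always_positive(equity))   # True
```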

As you can imagine, we are now down to a tiny number of successful strategies that make it to this stage.

Once a strategy is successful, though, we produce a file that states its limitations and when and how it can be included in a system.

Stay tuned for the next article in this series, where I will talk a little more about the types of strategies. If you can’t wait until then for more on quant models, watch the NinjaTrader webinar I hosted recently, ‘Basic Quantitative Systems – for Everyone’.

Got a question for Sam? Tweet him directly at @LFXSam or contact us here
