Our latest research article is a follow-up to the paper we published in April this year, in which we introduced a framework for analysing and predicting regimes in the FX markets. In this update we extend the research to predict regimes in terms of level, e.g. the probability that USDJPY will be in a high volatility regime over the next hour, whereas previously we focused on changes in volatility. In addition, we publish out-of-sample performance results for a wider set of currency pairs, with the model achieving predictive performance ranging between 70% and 90% for both volatility and liquidity regimes. Please email email@example.com if you are a BestX client and would like to receive a copy of the paper.
In our latest research article we explore a method for estimating how ‘informed’ the trading activity in the FX market is over a given period of time. The method is based on the VPIN metric, or Volume-Synchronised Probability of Informed Trading, which has been applied in other asset classes. Using VPIN in OTC markets such as FX is more challenging due to the lack of absolute volume numbers and, moreover, the diverse set of trading objectives within FX (e.g. hedging, funding, retail), which adds additional noise. However, in this paper we explore a method for computing the metric using an approximation of volume, and discuss potential applications. The initial results do appear intuitive; for example, we observe an increasing proportion of ‘informed’ trading leading up to the 4pm WMR Fix. Our research into this field will continue, as we think there are potential applications in further enhancing our Pre-Trade module, e.g. in selecting an execution protocol or algo, determining how long to run an algo for, or how passive to be. Please email firstname.lastname@example.org if you are a BestX client and would like to receive a copy of the paper.
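For intuition, a VPIN-style calculation can be sketched in a few lines. This is a minimal illustration only, assuming buy/sell volumes have already been approximated upstream (e.g. via a tick rule or bulk volume classification); it is not the implementation used in the paper, and all names are illustrative.

```python
from collections import deque

def vpin(buy_sell_volumes, bucket_size, window=50):
    """VPIN-style estimate: the average absolute order-flow imbalance
    over the last `window` equal-volume buckets.

    buy_sell_volumes: iterable of (buy_volume, sell_volume) per trade,
    e.g. produced by a tick-rule or bulk-volume classification."""
    buckets = deque(maxlen=window)   # rolling window of bucket imbalances
    cur_buy = cur_sell = 0.0
    for buy, sell in buy_sell_volumes:
        cur_buy += buy
        cur_sell += sell
        # Close the bucket once its total volume is filled.
        if cur_buy + cur_sell >= bucket_size:
            buckets.append(abs(cur_buy - cur_sell) / (cur_buy + cur_sell))
            cur_buy = cur_sell = 0.0
    if not buckets:
        return None   # not enough volume to fill a single bucket
    return sum(buckets) / len(buckets)
```

A reading near 1 would indicate heavily one-sided (potentially ‘informed’) flow in recent buckets; a reading near 0 indicates balanced flow.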
How to make sure you are comparing apples with apples – complexities of peer analysis
A horse never runs so fast as when he has other horses to catch up and outpace.
We’ve been asked in the past to provide functionality allowing an institution to compare its execution performance against peers on a relative basis. This practice is widely used in equities and is often discussed at board level of both managers and asset owners. Applying peer analysis in a market such as FX is, in our opinion, more complex, largely due to the heterogeneous nature of the participants who transact FX. As we have discussed here before, the many different trading objectives arising from the varied nature of FX market participants make it more complex to apply a simple equity-style peer analysis to FX.
Like all BestX enhancements, peer analysis has been backed up by academic research. Recently, the ECB and IMF released a paper that indicated that transaction costs in the FX market are dependent on how 'sophisticated' a client is. They found that more sophisticated clients, e.g. ones that trade larger and more numerous tickets with a wider variety of dealers, achieve better execution prices than those that are less sophisticated. This highlights the need for another level of comparison; you might be performing well against a benchmark, but are you doing as well as your comparable peers against such a benchmark?
Classically, within peer analysis, you may decide to analyse net trading costs versus a specific benchmark (e.g. for equities, VWAP is commonly used) for a sector of the institutional community (e.g. ‘real money’ managers). However, for FX, the ‘real money’ manager peer group may be trading FX for many different reasons, which requires the use of a varied range of benchmarks and metrics to measure performance appropriately.
For example, within the real money community, many passive mandates may be tracking indices where NAVs are computed using the 4pm WMR Fix. In this case the manager is very focused on minimising slippage to the WMR Fix benchmark, and other benchmarks, such as Arrival Price or TWAP, may be irrelevant. It would therefore be inappropriate for a passive manager, whose best execution policy for these mandates is aimed at minimising slippage to WMR, to be compared to their peers on performance versus, for example, arrival price.
Enter the BestX Factors concept. BestX Factors was one of the core concepts upon which our post-trade product was built, allowing a client to select the execution factors, and performance benchmarks, that are relevant to their business and execution policy. Through BestX Factors, clients can select specific benchmarks, and apply these if required to only specific portfolios, or trade types etc. The BestX Peer Analysis module is also governed by BestX Factors, allowing clients to construct Peer Analysis reports that are specific to the benchmarks relevant to their style of execution.
Furthermore, a static report providing high level relative peer performance only provides a broad picture and can mask key conclusions that may help identify where attention and resources should be focused to help improve performance.
For example, having the ability to inspect the peer results by product adds an extra layer of value. It may be that a specific client is performing extremely well versus the peer group when it comes to the Spread Cost for Spot trades but may perform less well for outright Forward trades.
In addition, further breaking out results by currency pair helps isolate other aspects where performance may be improved. Results vs Arrival Price for G10 may rank in the top quartile, but NDF performance may look less impressive when compared to the peer group.
In our view, therefore, it is important that any Peer Analysis is done in a way that allows such granular inspection of the results, thereby allowing the product to add real value rather than simply tick another box, ‘yeah, sure, we do peer analysis, we consistently come in the top quartile’. Nice to know, but as with our philosophy in general across the BestX product, it seems sensible to use the analysis to add real value if possible.
A challenge of such granularity, of course, is ‘analysis paralysis’ as the amount of data can become overwhelming quite quickly. Nobody has the time to search through tables of results, trying to figure out the good and bad bits and what to do about it. We turn to another of our core philosophies here, which is turning big data into actionable smart data. Visualisation is critical in achieving this, and we return to our ‘traffic light’ concepts to help quickly highlight what is going on in a given portfolio of results.
The trophy icons simply indicate when a client is on the podium in terms of performance for that particular metric, whereas the traffic light colour indicates which percentile band the performance falls into.
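The mapping from peer results to visual cues can be sketched very simply. This is a purely illustrative sketch of the idea; the band boundaries and podium cut-off here are hypothetical, not the product’s actual settings.

```python
def traffic_light(percentile):
    """Map a peer-ranking percentile (0 = best, 100 = worst) to a
    traffic-light colour. Band boundaries are illustrative only."""
    if percentile <= 25:
        return "green"   # top quartile versus the peer group
    if percentile <= 75:
        return "amber"   # middle of the pack
    return "red"         # bottom quartile

def on_podium(rank):
    """Trophy icon: top three in the peer group for the metric."""
    return rank <= 3
```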
One of the other key factors to consider, when conducting peer analysis, is the size of the pool of data to ensure that any output can be analysed with confidence. For this reason, we have waited to launch our Peer Analysis module until our anonymised community data pool for the buy-side had become large enough. We feel we are now in that position, with a buy-side anonymised community pool comprised of millions of trade records. We therefore launched our new Peer Analysis module for the buy-side as part of our latest release last weekend.
In conclusion, comparing performance to peers across the industry provides an additional input to the holistic view of best execution that the BestX software seeks to deliver. It should be used in conjunction with the other metrics and analysis available throughout the product (for example, the fair value expected cost model), and it clearly focuses only on relative performance. Peer analysis should, therefore, be used carefully, especially when applied to FX, due to the very heterogeneous nature of the market. The concept of client ‘sophistication’ is interesting and one that we are exploring further, to see if it can be added at a later date to provide an additional clustering of the data (i.e. to allow clients of similar sophistication to compare themselves).
Tomorrow belongs to those who prepare for it today
In years to come, we will probably look back at this period of history in the financial markets and recognise that it was an era defined by incredibly rapid change, driven largely by technology. The democratisation of computing via the cloud, providing access to cheaper, faster processing power, is allowing the financial markets to harness the vast amounts of data produced and stored every day. This is not only delivering the ability to harness ‘big data’ but, more importantly in our view, to turn these vast data pools/lakes/oceans into actionable ‘smart data’.
So, what has any of this to do with Best Execution? Well, in this article we explore where we think the future of Best Execution may evolve, driven by this increasingly accessible technology, data and analytics.
Before we embark upon envisioning the future, let’s recap on the present. To clarify what we are talking about, it is worth reiterating the definition of best ex, at least from a regulatory perspective. MiFID II uses the following definition:
“A firm must take all sufficient steps to obtain, when executing orders, the best possible results for its clients taking into account the execution factors.
The execution factors to be taken into account are price, costs, speed, likelihood of execution and settlement, size, nature or any other consideration relevant to the execution of an order.”
Source: Article 27(1) of MiFID II
How does this regulatory definition translate in practice? In our view, best ex can be summarised from a practical perspective via the following, many of which we have explored in detail in previous articles:
· Best execution is a process
· Covers lifecycle of a trade, from order inception, through execution to settlement
· Requires a documented best execution policy, covering both qualitative and quantitative elements
· Process needs to measure and monitor the execution factors relevant to a firm’s business
· Any outlier trades to the policy need to be identified, understood and approved
· Requires continual performance assessment, learning and process enhancement, i.e. a best ex feedback loop
Our experience indicates that there is a wide spectrum of sophistication with regard to the implementation of best execution policies and monitoring. However, there are institutions at the ‘cutting edge’ who are clearly helping define the direction of travel for best execution in the future. As already discussed, technology, data and analytics are key enablers here, but the over-arching driver for change is the pressure on returns and the need to control costs. As a result, execution desks increasingly need to automate workflows where possible, as fewer traders are asked to trade more tickets and more products, navigate increasingly complex market structures, and utilise a wider array of execution protocols and methods. With this in mind, we can craft a vision of where best execution may evolve over the next decade.
To structure the vision, we break the best execution process into the following 3 components:
1. Order Inception
2. Order Execution
3. Execution Measurement
A general statement to kick off is that we feel the trend for flow to be split between high and low touch will continue, and this bifurcation will run as a theme throughout the future state. Traders will have to prioritise the tickets that require their attention, which may be the larger trades, the more complex trades, the less liquid etc. Broadly speaking, such trades will be defined as ‘High Touch’, and everything else will fall into the ‘Low Touch’ category. The rules for defining the boundary between high and low will obviously vary considerably by institution and will be a function of the size and complexity of the business, the number of available traders and the strategic desire, or not, to automate.
1. Order Inception
As summarised in the figure below, we anticipate that at the order inception stage, there will be a differentiation in high vs low touch orders in that the former will be generally subjected to some form of pre-trade modelling to help optimise the execution approach.
Such pre-trade modelling will ultimately be carried out at a multi-asset level, where appropriate, such that an originating portfolio manager would be able to assess the total cost of the order at inception, including any required FX funding or hedging, as illustrated in the example below:
A natural consequence of such pre-trade modelling at inception is the evolution of the relationship between the portfolio manager and execution desk, whereby the traders become ‘execution advisors’, having a proactive dialogue with the manager on the optimal execution approach to be adopted, via a combination of market experience and analytics.
Clearly, low touch flow would not follow the same protocols. Here it is likely that rules will be deployed to determine the optimal execution method, again driven by analytics and empirical evidence. However, it is still potentially relevant to have pre-trade estimates computed and stored to allow an ex-post comparison, which may be valuable over time to help refine the rules via ongoing monitoring and feedback.
2. Order Execution
Moving to actual execution, the pre-trade work done for high touch flow clearly allows informed decisions to be made that are defensible and can be justified to best ex oversight committees, asset owners and regulators.
The definition of execution rules for low touch, based on objective analytics and empirical evidence, also allow such justification to be carried out. Some examples are provided below:
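To illustrate the flavour of such rules, a minimal sketch of a low-touch routing rule set might look as follows. The thresholds, categories and method names are entirely hypothetical, and would in practice be calibrated from empirical performance data rather than hard-coded.

```python
def route_order(notional_usd, pair_liquidity, urgency):
    """Illustrative low-touch routing rules; all thresholds are
    hypothetical and would be calibrated from empirical evidence."""
    if pair_liquidity == "illiquid" or notional_usd > 200e6:
        return "high_touch"          # escalate to a trader
    if urgency == "high":
        return "risk_transfer_rfq"   # pay spread for immediacy
    if notional_usd < 10e6:
        return "streaming_price"     # small tickets: hit firm streams
    return "algo_twap"               # work larger, patient orders
```

The point is not the specific cut-offs, but that each branch is objective, documentable, and can be monitored and refined over time.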
Trader oversight is obviously going to remain critical, allowing over-rides of rules at any point to accommodate specific market conditions on any given day. In addition, a key component will be to rigorously monitor the performance of the pre-trade modelling for high touch flow, and of the rules defining execution methods for low touch flow. This performance monitoring, focused on the execution objectives relevant to the institution in question, will require a regular review of outcomes to establish whether changes need to be made to any of the rules.
3. Execution Measurement
Monitoring and measurement of performance will be carried out at 2 levels: tactical and strategic. Tactical monitoring is required on a timely basis, and we would envisage the future state for this to be real-time, or near real-time, allowing every trade, both high and low touch, to be monitored to identify any outliers to the best ex policy.
Strategic monitoring would be a more systematic measurement of performance over large, statistically significant data samples. Clearly, the frequency of this will be dictated somewhat by how active a given institution is: smaller institutions may not generate enough trades for, say, a quarterly analysis to yield a sample large enough to draw conclusions from, whereas larger, more active institutions may find that monthly reviews are appropriate. Market conditions may also dictate ad hoc reviews, e.g. counterparty defaults, market structure changes, financial crises etc. However, whatever the frequency, it is essential that systematic oversight is in place to ensure that the rules coded to drive the bulk of execution are producing satisfactory and optimal results. The old cliché that you can only manage what you can measure has never been truer than in this particular case.
Although the future state we envisage in this article may take a number of years to evolve, and indeed, may evolve in a very different form, there are a number of trends that are already observable in the industry which indicate that significant change is coming.
Across both the buy and sell side, the trend to do ‘more with less’ feels unstoppable, and this will result in an increased deployment of technology and automation. However, we don’t anticipate a totally AI-driven world, ‘staffed’ by trading bots. The random nature of financial markets makes it impossible to code for every eventuality, and this won’t change. Market experience and human oversight will always be required, albeit, as we have indicated, traders are having to trade more tickets and be responsible for more products, whilst navigating more complex market structure and an increased number of execution protocols and products.
At the core of this future state will be data driven insights and analytics, which will also satisfy the ever-increasing governance demands and evidencing of best execution, from both regulators and asset owners.
There are many potential applications for trying to understand what particular state or regime a market is currently in, and more importantly, what regime is predicted. For example, attempting to predict price momentum from an alpha or execution timing perspective, or predicting volatility and liquidity regimes to assist in execution decision making. At BestX, our regime research has initially focused on the latter, and in order to provide a predictive component to the regime analysis we have employed machine learning, a particularly hot topic in its own right, with many different methods and approaches now available. Rather than simply choosing the most complex-sounding method for quantitative and intellectual satisfaction, we conducted a rigorous study of 6 different methods to determine which is the most appropriate for our particular problem of predicting regimes in volatility and liquidity. Interestingly, we found that the more complex deep learning/neural net methodologies were not as successful for regime prediction as a simpler classification method. This reiterated to us the importance of picking the right tools for the job. If you are a BestX client and would like a copy of the research paper, please email us at email@example.com.
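To give a flavour of what a ‘simpler classification method’ can look like, here is a generic nearest-centroid sketch: each regime is summarised by the average of its training feature vectors, and new observations are assigned to the nearest regime. This is purely illustrative and is not the method selected in the paper.

```python
def train_centroids(samples):
    """samples: list of (feature_vector, regime_label) pairs.
    Returns the mean feature vector per regime label."""
    sums, counts = {}, {}
    for x, label in samples:
        acc = sums.setdefault(label, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def predict_regime(centroids, x):
    """Assign x to the regime with the nearest centroid (squared
    Euclidean distance)."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(c, x))
    return min(centroids, key=lambda lab: dist2(centroids[lab]))
```

In practice the features might be recent volatility and liquidity measures, and the labels the regime states; the appeal of simple classifiers here is interpretability and robustness on modest sample sizes.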
Our latest research attempts to tackle the long-standing question of ‘how many participants should I have on my RFQ panel?’. Not an easy question to answer and, at the risk of sounding like a politician, the answer is ‘it depends’. On what? Well, it generally depends on how you trade and what your objectives are. For example, if you trade full size and don’t tend to build into a position over a period of time, then market impact may not be a key priority and you may simply want to focus on minimising spread costs. If, however, you execute in slices, building into positions, then market impact can have a significant effect on the goal of achieving best execution.
We explore a causal approach to answering the question, trying to balance the ‘sweet spot’ of minimising the spread paid, whilst at the same time, minimising the potential information leakage and subsequent market impact. Please email firstname.lastname@example.org if you are a BestX client and would like to receive a copy of the research.
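To make the ‘sweet spot’ intuition concrete, here is a stylised cost curve in which expected spread decays with dealer competition while expected impact from information leakage grows with panel size. All parameters are hypothetical and not taken from the research; the shape of the trade-off, not the numbers, is the point.

```python
import math

def panel_cost(n, spread0=4.0, decay=0.5, leak=0.3):
    """Stylised total cost (bp) of an RFQ sent to n dealers.
    Expected spread falls with competition; expected impact from
    information leakage rises roughly linearly with panel size.
    All parameters are hypothetical, for illustration only."""
    expected_spread = spread0 * math.exp(-decay * (n - 1))
    expected_impact = leak * (n - 1)
    return expected_spread + expected_impact

def best_panel_size(max_n=15):
    """Panel size minimising the stylised total cost."""
    return min(range(1, max_n + 1), key=panel_cost)
```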
In our latest research we explore the arcane subject of comparing different methodologies for measuring high frequency volatility. This can be more art than science, but a robust intraday volatility measure is extremely useful for many aspects of modelling. Within the BestX framework, we use such a metric to help estimate opportunity cost within our Pre-Trade module. Going forward we also plan to introduce a machine learning based model to help predict intraday volatility regimes, and this work serves as a building block for the more complex models to come. Please email email@example.com if you are a BestX client and would like to receive a copy of the article.
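As a flavour of the kind of comparison involved, here are two standard estimators often contrasted in this setting: a simple close-to-close measure and the range-based Parkinson (1980) estimator. This is generic textbook material, not the BestX methodology.

```python
import math

def close_to_close_vol(prices):
    """Sample standard deviation of log returns (per bar;
    annualisation left aside)."""
    rets = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    mean = sum(rets) / len(rets)
    var = sum((r - mean) ** 2 for r in rets) / (len(rets) - 1)
    return math.sqrt(var)

def parkinson_vol(highs, lows):
    """Parkinson range-based estimator: uses intra-bar high/low
    ranges, typically more efficient than close-to-close."""
    k = 1.0 / (4.0 * math.log(2.0))
    terms = [math.log(h / l) ** 2 for h, l in zip(highs, lows)]
    return math.sqrt(k * sum(terms) / len(terms))
```

Range-based measures pick up intra-bar movement that close-to-close misses, which matters most at high frequency where each bar contains relatively few price observations.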
“Somebody said it couldn't be done”
Edgar A. Guest
At a recent fixed income conference, the title of the obligatory TCA session was ‘measuring the unmeasurable’. There are many in the industry who still hold this view, believing that traditional TCA is either not relevant to, or impossible to measure within, Fixed Income. However, at BestX, we would prefer to think along the lines of ‘some of it isn’t measurable, but for a large proportion of the Fixed Income market it is possible to generate some meaningful analysis that can add value’. Admittedly, not as snappy a title for a conference session, but a fair reflection of the current reality. In this article we draw on our own experiences of expanding the BestX product across Fixed Income to highlight some of the issues and how we’ve tried to address them.
There are many hurdles to leap when attempting to build viable analytics to measure TCA for Fixed Income markets. To begin with, it is worth clarifying that in our view, TCA is a somewhat misleading term. We see measuring transaction costs, in the traditional TCA sense, as an essential component of an overall suite of best execution analytics, that seek to add value across the lifecycle of the trade and best execution process. But only a component.
Let’s return to first principles and recap on what best execution actually is. Not in terms of the formal regulatory text definition but in practical terms, what does it really mean? We distil the essence of best execution into the following 6 bullet points:
1. Best execution is a process
2. Covers lifecycle of a trade, from order inception, through execution to settlement
3. Requires a documented best execution policy, covering both qualitative and quantitative elements
4. Process needs to measure and monitor the execution factors relevant to a firm’s business
5. Any outlier trades to the policy need to be identified, understood and approved
6. Requires continual performance assessment, learning and process enhancement, i.e. a best ex feedback loop
Ok, so if we agree this is what we are trying to achieve in order to deliver upon our fiduciary responsibility to asset owners and our desire to optimise performance, it is clear that a simple pdf or Excel report with a single cost number, measured versus a single closing price of the day, is not going to be sufficient. Clearly a technology solution is required, that can be configured to an institution’s specific business, trading protocols and best execution policy. This solution needs to measure the relevant performance factors, automate the outlier monitoring process and provide flexible, interactive reporting analytics to dive into performance results to seek areas where additional value can be extracted.
Within such a solution, as accurate a measurement of cost as possible is obviously extremely important, and this was the first challenge we sought to tackle in our expansion to Fixed Income. For non-Fixed Income people it may not be obvious why this is such a challenge so, at the risk of stating the obvious, it is all about market data availability. The issues here are numerous, for example:
Now, we were hoping, as were many others, that the sunlit uplands post-MiFID II were going to be filled with overflowing volumes of cheap, if not free, market data that all market participants could readily delve into. This has not transpired. However, there are sources of quality, timely data available, at a price, and this is where we turned. We didn’t want to build a Fixed Income product that wasn’t commensurate with the quality of our award-winning FX product, so high quality market data was essential. However, given the sheer breadth and complexity of the Fixed Income market, where there are millions of securities traded around the world, there are always going to be gaps. Such gaps may be short term due to, for example, new issues, or more structural, for example, complex structured sections of the market just don’t trade in the conventional sense. This required thought when building the trade parsers and analytics engine to mitigate gaps in market data coverage and quality, a challenge made easier given the modern cloud-based technology stack we are working within at BestX.
With the best market data available, and applying innovation to the analytics, there is still the need for a healthy dose of pragmatism when measuring transaction costs in Fixed Income. Indeed, a client recently told us that in Fixed Income “a price is not a price, it is simply an opinion”! There are always going to be highly illiquid sections of the market that do fall within the unmeasurable category, but we have found that it is possible to construct accurate measures of mid for the vast majority of bonds. This then, obviously, allows decent spread cost numbers to be measured for a given time stamp.
Time stamps. Another data issue altogether, although one we are familiar with from FX land. Fixed Income execution is becoming increasingly electronic and automated, a necessary development as buy-side execution desks are increasingly asked to do ‘more with less’, with traders having to execute more tickets, become responsible for more products and develop experience in more execution protocols. From the analysis we have done so far, time stamping for trades executed over the various MTFs looks pretty robust, as you would expect. Issues tend to arise in voice-executed business, although here the quality of time stamps is improving post-MiFID II. It goes without saying, but we will say it again anyway: it is impossible to measure anything accurately without a decent time stamp.
Issues around data accuracy (trade data, time stamps, market data and benchmarks) appear to be the number one priority for clients when research surveys are conducted. Getting all of this as right as possible is deemed much more important than, for example, automated connectivity with EMS/OMS platforms, at least for now. We expect demand for such connectivity to rise once the basics are in place.
Making it actionable
Another of the common complaints around applying TCA in markets such as Fixed Income is ‘what do we do with the output?’. Measuring a simple cost number on an infrequent basis doesn’t lend itself to making any such analysis actionable. This is why it is key to implement any TCA metrics as part of a best execution workflow and lifecycle.
Back in October 2016 we talked about feedback loops in the best execution process and applying the concept of marginal gains to improve performance. There are many decision points in the execution process where additional data-driven evidence can help the trader make a more informed decision, for example:
- What time of day should I trade?
- What size of trade should I execute?
- Who should I execute with?
- Should I trade principal or request the counterparty to act as agent for my order?
- If I trade principal, should I trade via the phone or electronically?
- If I trade electronically, which platform should I trade on?
- Should I hit a streaming firm electronic price, or should I trade RFQ?
- If I RFQ how many quotes should I request?
- How quickly should I trade?
To ensure any output from your Fixed Income analytics can be actioned it is essential to have the following components to your TCA/best ex platform:
1. Intuitive, graphical interface which allows large data sets to be queried and filtered quickly
2. Enough data to make the performance results significant
3. Timely information and analysis
4. Ability to get into detailed diagnostics on a trade by trade level if required
We were already aware how important the last point was whilst building out our FX product. However, Fixed Income, with its more complex analytics, has increased this requirement even further. For example, it is imperative to be able to understand where a specific cost number comes from, down to knowing which benchmark bonds were used to construct the yield curve when sufficiently fresh prices were not available in the specific security.
So, is Fixed Income TCA measurable? For a large part of this diverse and complex market, we think it is. Is it perfect? No, but our philosophy has always been to be pragmatic, rigorous and thoughtful when building what is possible under the given constraints. Getting such a first iteration out and used by clients allows us to evolve and improve over time, whilst at the same time hopefully benefitting from improved availability, quality and coverage of market data. For example, who knows, but a Fixed Income Tape, as mandated by MiFID II, may even appear one day.
To quote Edgar A. Guest’s tremendous poem again,
Somebody scoffed, "Oh, you'll never do that
At least no one ever has done it."
But he took off his coat, and he took off his hat,
And the first thing we know, he'd begun it.
With a lift of his chin and a bit of a grin,
Without any doubting or "quit-it".
He started to sing as he tackled the thing
That couldn't be done. And he did it.
Ok, in true BestX style, there hasn’t been a lot of grinning, and certainly no singing, but we have done it.
This brief article continues our series of case studies for the practical deployment of the BestX execution analytics software. Please note these case studies should be read in conjunction with the BestX User Guide, and/or the online support/FAQs. This latest article explores the Regulatory Reporting module of BestX, and how we feel it can be utilised to make the RTS28 reporting obligations under MiFID II of some genuine value. We can hear the cries of derision upon reading this statement but, trust us, we do think it is possible to make some of the reporting under RTS28 add real value to your best execution process and the overall goal of improving performance.
Generating the classic Top 5 report appears to add little value on its own – however, if this is produced in conjunction with a report that summarises performance according to your best execution policy (or in MiFID II speak, Order Execution Policy, as defined under Article 27(5)), then we enter the realms of added value. Assessing your top Venues, Counterparties and/or Channels (e.g. MTFs) on an overall performance basis and then marrying this up with where your actual volumes are executed can produce some interesting insights, and potentially actionable items to help improve performance going forward. This is one of the intended, as opposed to one of the many unintended, consequences of MiFID II and requires a systematic, and configurable, framework to measure and compare performance.
If you are a contracted BestX client and would like to receive the full case study please contact BestX at firstname.lastname@example.org.
This brief article continues our series of case studies for the practical deployment of the BestX execution analytics software. Please note these case studies should be read in conjunction with the BestX User Guide, and/or the online support/FAQs. Given that 2018 is drawing to a close, the focus of this third case study is the generation of year-to-date reports.
BestX provides a multitude of different report types, serving many different purposes and use cases. In this case study we cover 3 of the most common:
Generating annual post-trade performance reports – used for internal best execution review meetings etc
Generating annual exception report summaries – useful for showing documentary evidence of the implementation, monitoring and adherence to a firm’s best execution policy
Generating annual total consideration reports – useful for high-level cost summaries for client accounts etc
The flexibility of BestX allows users to run such reports on the entire portfolio, or configured to specific elements of the year’s trading activity to analyse key components.
If you are a contracted BestX client and would like to receive the full case study please contact BestX at email@example.com.
At BestX we are continuing to research methods for enhancing the measurement of expected costs within the OTC markets. A parametric model provides a good approximation, but runs the risk of becoming increasingly complex as it tries to capture all edge cases, especially for less liquid currency pairs, certain times of day, or larger sizes. In parallel, we have been conducting research to develop a new, purely data-driven framework using machine learning methods. More work needs to be done, but this paper provides results for the framework when applied to measuring the forward component of FX transaction costs, a notoriously difficult part of the market to model given its voice-driven, OTC nature. For a copy of the paper please email us at firstname.lastname@example.org.
This brief article continues our series of case studies for the practical deployment of the BestX execution analytics software. To receive the full case study please contact BestX at email@example.com. Please note these case studies should be read in conjunction with the BestX User Guide, and/or the online support/FAQs. The focus of this second case study is the monitoring of trends in execution performance, a key component of the feedback loop required within a best execution policy. The BestX Trend module allows the construction of tailor-made analyses of trends in any of the different metrics that BestX computes, thereby providing flexibility to add value to any institution’s specific policy and process.
Monitoring execution performance over time, and learning lessons to help refine the policy and process further, is not simply something to satisfy any regulatory or fiduciary responsibilities. In our view it is a fundamental part of improving performance to improve returns, and can be applied to many constituents of the execution process, for example:
Monitoring liquidity provider performance over time, by product, by ccy pair, by trade size, by time zone etc
Monitoring algo product performance
Assessing how different venues perform over time
Are there any market structure changes occurring that are resulting in different execution methods performing differently, e.g. RFQ vs streaming prices?
Is market liquidity changing over time, e.g. is my business creating more impact than it used to?
We explored the concepts of feedback loops and marginal gains in a previous article, published in Oct 2016. This case study helps put concepts into practice using the BestX software.
To receive the full case study, please email us at firstname.lastname@example.org
Style is a simple way of saying complicated things
A common request from many of our clients over recent months has been to categorise algo products into a number of different styles. This is far from straightforward given the plethora of algo products now available (in BestX alone we have now seen 113 different algos across many providers), and also due to the fact that the market, and product innovation, continues to evolve rapidly. Given this breadth and complexity, it is probably pragmatic to start with a reasonably simplistic taxonomy of styles, which can always be refined over time. In this short article we introduce our initial family of algo styles based on a number of discussions with our clients.
To kick off, we are proposing 4 key algo styles, summarised in the diagram below, which stem from the initial key objective: is the algo attempting to achieve a specific benchmark or not?
For non-benchmark algos, we have suggested 2 style groups:
1. Get Done
2. Opportunistic
Whereas for algos that are designed to minimise specific benchmark slippage, we have suggested an additional 2 style groups:
3. Interval Based
4. Volume Based
In addition, for each algo there are additional attributes that can be used to describe behaviour:
1) Limit – whether the algo had a limit price applied to it or not, and,
2) Urgency – a data field describing the urgency, or aggressiveness, of the algo. There are many different forms of Urgency used in the market, so to simplify we are condensing these into three values: Low, Medium or High
The additional attributes are important to allow an apples vs apples comparison, for example comparing a sample of algos within a category where some have hit limits and others have not could pollute the performance results. Equally, if you were analysing a group of Opportunistic algos, it would be preferable to stratify the sample into groups with similar Urgency settings.
Going into these 4 categories in a little more depth:
1. Get Done – this family of algos is expected to include more aggressive algo types where the priority is less on minimising market impact or earning spread, but more focused on getting a specific amount of risk executed as quickly and efficiently as possible. Many providers offer products that are named ‘Get Done’.
2. Opportunistic – this group is anticipated to include a wide array of products that do not have a specific benchmark they are looking to achieve. Algos falling within this group include the many products in the market that seek to maximise spread capture, unencumbered by a strict schedule dictated by a benchmark. Urgency is often a key parameter within this style, determining how passive the algo is prepared to be in order to earn spread.
3. Interval Based – this group includes all algos that are attempting to minimise slippage to a benchmark where the algo slices the parent notional according to an interval, or time, based schedule. So, for example, this group would include all TWAP algos, the most commonly used algo within the FX market currently.
4. Volume Based – we anticipate this group to be the most sparsely populated given the largely OTC market structure of FX. Products such as 'Percentage of Volume', or 'POV', would fall within this style: algos attempting to execute in line with a specified percentage of the volume traded in the market. This style has been adopted from the listed markets, where it is easier to target a percentage of volume given the availability of volume data; in FX, any target will be approximate given the inexact nature of total traded volumes at any given point. VWAP, or Volume Weighted Average Price, algos would also fall within this style, but again they are less common in FX given the difficulties in measuring actual volumes.
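The taxonomy above, with its two attributes, can be sketched as a simple data structure. The following Python is a minimal illustration only; the field names, enum values and `has_benchmark` logic are our own assumptions for exposition, not the actual BestX schema:

```python
from dataclasses import dataclass
from enum import Enum

class AlgoStyle(Enum):
    GET_DONE = "Get Done"              # aggressive, no benchmark
    OPPORTUNISTIC = "Opportunistic"    # spread capture, no benchmark
    INTERVAL_BASED = "Interval Based"  # e.g. TWAP
    VOLUME_BASED = "Volume Based"      # e.g. POV, VWAP

class Urgency(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class AlgoClassification:
    style: AlgoStyle
    has_limit: bool   # was a limit price applied?
    urgency: Urgency

    @property
    def has_benchmark(self) -> bool:
        # The initial fork in the taxonomy: interval- and volume-based
        # styles are attempting to achieve a specific benchmark.
        return self.style in (AlgoStyle.INTERVAL_BASED, AlgoStyle.VOLUME_BASED)

# Example: a TWAP algo with a limit price and medium urgency
twap = AlgoClassification(AlgoStyle.INTERVAL_BASED, True, Urgency.MEDIUM)
print(twap.has_benchmark)  # True
```

Stratifying a sample by `has_limit` and `urgency` before comparing performance is then a straightforward group-by on these fields, supporting the apples-vs-apples comparison discussed above.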
The proposed taxonomy is, by construction, a simplification of a complex ecosystem of algos, which means compromises will need to be made when categorising products. Hopefully the additional Limit and Urgency fields will help with the grouping to some extent. The objective of this exercise was to propose something simple, pragmatic and understandable, whilst still providing a decent representation of the current array of algo products. The Algo Style field will go live within the next BestX release. We recognise this is likely to be an iterative process, but we felt it was important to respond to client demand and take the initiative. We do not seek to impose our view on the market; as always, we hope for feedback and expect this concept to continue to evolve as we seek to represent the majority view in the market.
This brief article introduces the first in a series of case studies for the practical deployment of the BestX execution analytics software. To receive the full case study please contact BestX at email@example.com. Please note these case studies should be read in conjunction with the BestX User Guide, and/or the online support/FAQs. The focus of this first case study is the identification of outlier trades, and the management of the workflow around identification, explanation and approval of such exceptions. We explored this topic conceptually in an earlier article, 'Red Flags and Outlier Detection', whereas this case study explores how to implement the concept in a practical context using the BestX software.
There is a clear consensus across the industry that the key element of any best execution policy is the process, and not necessarily individual outcomes of specific trades. The objective is to implement and monitor a best execution policy, and then refine it over time based on the iterative process of reviewing performance on an ongoing basis. A core component of this process is the identification of trades which are exceptions to the policy to help provide insights into where the policy may need refining. The BestX product allows an institution to ‘codify’ its specific best execution policy, allowing user defined thresholds to specify exactly which trades should be identified as exceptions.
At BestX we have now observed many different use cases for the exception reporting functionality, and not all are for compliance purposes. For example, systematic identification of particular algos that create significant market impact in a chosen group of currency pairs may be a useful input into refining the best execution policy around algo selection. Common examples of exception reports include:
1. Notification of any trade where the actual spread cost incurred is greater than the estimated BestX expected cost
2. Identification of any trades that breach agreed commission rates to, for example, the WMR mid benchmark rate
3. Identification of trades generating significant pre or post trade market impact
4. Identification of any trades falling outside the trading day’s range
5. For a matched swap portfolio, identification of any trades where the cost arising from the forward points exceeds either the BestX expected cost, or a defined threshold
6. Identification of algo trades creating potential signaling risk, or significant market impact
Clearly, identification of outlier or exception trades is a critical component of best execution. When implementing such a process, flexibility is essential, both in terms of which metrics you wish to monitor and the thresholds you specify for those metrics, above which a trade is defined as an outlier under your policy. We have learnt at BestX that there really isn't a market consensus or 'one size fits all' approach to defining outliers across the industry. To receive the full case study, please contact us at firstname.lastname@example.org
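Threshold-based exception checks of the kind listed above can be sketched as a small rule engine. The rules and thresholds below are hypothetical examples for illustration only, not the BestX policy engine or any client's actual configuration:

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Trade:
    spread_cost_bps: float        # actual spread cost paid, in bps
    expected_cost_bps: float      # model-estimated expected cost, in bps
    post_trade_impact_bps: float  # post-trade market impact, in bps

@dataclass
class ExceptionRule:
    name: str
    predicate: Callable[[Trade], bool]  # True => trade breaches the rule

def flag_exceptions(trades: List[Trade],
                    rules: List[ExceptionRule]) -> List[Tuple[Trade, str]]:
    """Return (trade, rule_name) pairs for every rule each trade breaches."""
    return [(t, r.name) for t in trades for r in rules if r.predicate(t)]

# Hypothetical policy: flag trades whose actual cost exceeds the expected
# cost, or whose post-trade impact exceeds a fixed 5 bps threshold.
rules = [
    ExceptionRule("cost > expected",
                  lambda t: t.spread_cost_bps > t.expected_cost_bps),
    ExceptionRule("impact > 5bps",
                  lambda t: t.post_trade_impact_bps > 5.0),
]

trades = [Trade(2.0, 1.5, 1.0), Trade(0.8, 1.0, 6.2)]
for trade, rule_name in flag_exceptions(trades, rules):
    print(rule_name, trade)
```

Because each rule is just a named predicate, user-defined thresholds can be varied per institution without changing the flagging logic, which mirrors the point above that there is no one-size-fits-all definition of an outlier.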
Prediction is very difficult, especially if it's about the future
In the context of achieving best execution, there is a growing focus on the pre-trade stage of the process. Accessing the most appropriate information at the point of trade can help execution desks make informed decisions around their various execution choices, including timing and style of execution.
When making an informed decision at the point of trade, one key input is an understanding of the prevailing liquidity conditions, and if not trading via risk transfer, an indication of how those conditions will develop during the period that the order is worked in the market. This is not as straightforward as it sounds, even for a listed market where liquidity data is readily available. For a market such as FX, which has a complex, hybrid market structure, with liquidity fragmented over many venues, it is extremely difficult.
Hence the need to look for help outside of traditional financial theories and practices. At BestX we are always creatively looking to solve problems by thinking laterally and learning from other disciplines and industries.
We are aware that the statistical properties of trading activity in FX exhibit clustering behaviour, similar to volatility, in that periods of high trading activity are more likely to be followed by another period of high trading activity. This 'memory' effect is evident, for example, around key points in a trading day such as the release of key economic data, times of key benchmarks such as WMR etc. This behaviour led us to explore a statistical framework that was first used to model the impact of earthquakes in terms of predicting the frequency and magnitude of subsequent tremors. This is analogous to the observed characteristics of trading activity and volumes, i.e. there is a 'shock', for example an unexpected interest rate rise from a central bank, which results in a significant spike in trading volume, which then proceeds to have ripple effects throughout the market over an ensuing period of time. Clearly, there would be significant value in having some form of estimate of how the ensuing liquidity conditions would evolve, both in terms of magnitude and frequency.
Before we explore such a framework further, let's just return to why this should be of any use whatsoever. Well, for example, if you are looking to execute a large block of risk, say 1bn USDJPY, and have discretion on the execution approach and timing, then having some idea of how volumes in USDJPY may evolve over the next hour could be a key input in your decision making. If your objective is to minimise spread cost, and you are therefore leaning towards selecting an algo designed to be passive in the market and 'earn' spread, a key variable for the performance of such an algo will be the depth of liquidity available to the algo whilst it is working. If, from previous experience, you know that the algo you are planning to use performs particularly well in market states characterised by high volumes, then a prediction of volumes will help decide whether to press ahead with this strategy or perhaps select a different product. Or, you may be seeking to minimise market impact and wish to time the trade to coincide with a high volume period to help 'hide' the trade. Insight into when such periods may occur over the next few hours is clearly helpful in this decision making.
Back to earthquakes. A statistical framework called the Hawkes Process has been used for some time for modelling earthquakes. This framework relies on historical information but is 'self exciting', the technical term for the phenomenon whereby, when an event occurs (e.g. an earthquake), the probability of further events occurring (i.e. tremors) increases. From a volume prediction perspective, a spike in trading activity due to a data release will generally result in increased subsequent trading activity. Over time, assuming there are no further events, trading activity reverts back to its normal state. The Hawkes Process attempts to capture this behaviour.
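The self-exciting behaviour can be made concrete with the conditional intensity of a Hawkes process. A minimal sketch, assuming the common exponential decay kernel; the parameter values `mu`, `alpha` and `beta` here are illustrative choices of ours, not calibrated BestX model parameters:

```python
import math

def hawkes_intensity(t, event_times, mu=1.0, alpha=0.8, beta=1.2):
    """Conditional intensity of a Hawkes process with exponential kernel:
        lambda(t) = mu + sum over past events t_i < t of alpha * exp(-beta*(t - t_i))
    Each past event temporarily raises the probability of further events
    (self-excitation), then the intensity decays back to the baseline mu.
    Stationarity requires alpha/beta < 1, satisfied by the defaults."""
    return mu + sum(alpha * math.exp(-beta * (t - ti))
                    for ti in event_times if ti < t)

# A 'shock' (a cluster of events around t=10, e.g. a surprise data release)
# raises the intensity, which then decays back towards the baseline mu.
events = [10.0, 10.1, 10.2]
for t in (9.0, 10.5, 12.0, 15.0):
    print(f"t={t}: lambda={hawkes_intensity(t, events):.3f}")
```

Before the shock the intensity equals the baseline; just after it the intensity jumps, then decays, which is exactly the clustering-and-reversion pattern the article describes in trading volumes.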
As a brief aside, just to further clarify a key point: we are not attempting to predict an actual earthquake, or market shock. We are trying to predict what the likely impact of such an event will be. We were once asked if BestX had a model for predicting 'flash crashes'; the query was met with an incredulous pause. If we had a model for predicting flash crashes then it is highly likely that the BestX business model would be very different, and probably involve relocating to the Cayman Islands and making lots of money.
Going back to Hawkes. Importantly, the hierarchical nature of the framework allows it to capture the different volume distributions typically observed on different days of the week. The amount of trading also varies by day, with Mondays usually the quietest day of the week and Wednesdays showing, on average, the largest volumes. The charts below display the average intraday volume profiles for USDCAD over the course of 2016, also highlighting how different the pattern is on days such as Non-Farm Payroll days, where trading activity is concentrated around the time of the data release.
So, does it work? It is still early days, but the initial results look very encouraging. The charts below show the predicted volume profile compared to the actual profile (black line), with the shaded area representing the 95% confidence interval for the prediction.
We are now going to put the research into a development environment which will allow a more systematic testing of the framework, across many more currency pairs and using a much deeper data set to ‘train’ the framework. In addition, we wish to model other key event days such as ECB, FOMC meetings days, and also month, quarter and year-ends. Assuming the framework continues to perform to our satisfaction, we will then look to incorporate this within the BestX Pre-Trade module at a later date.
“Turn and face the strange”
In our latest paper we continue our research into measuring the impact of the ‘turn’ within the FX markets. This somewhat strange phenomenon, which manifests itself around key dates throughout the year, is generally caused by supply and demand for funding by large financial institutions, which can create dislocations in the forward curves for certain currencies. In this latest research we empirically measure the impact of the turn around a range of different dates, including year, quarter and month end, but also event days such as NFP, FOMC etc. In summary, we find that the impact is most significant for year and quarter ends, with, for example, an average magnitude at year-end of between 0.6 and 1.5 bps for EURUSD depending on the tenor of the transaction. The work has helped us prioritise where to adjust the forward curve interpolation to better estimate mid for broken dates. To receive a copy of the paper please email us at email@example.com
“I love deadlines. I like the whooshing sound they make as they fly by”
MiFID II is here. From all accounts, January 3rd 2018 was similar to Y2K day, in that everyone woke up, went to work and, generally speaking, everything carried on as usual. It is clear that the ‘soft’ launch of MiFID II has resulted in no discernible disruption from a liquidity or execution perspective, but there are a number of looming elephants in the room that were postponed e.g. the additional 6-month grace period for assignment of LEI codes. So, those that were waiting for January 3rd to come and go in the hope that they could leave MiFID II behind them, and get on with their day jobs, are going to be disappointed. 2018, and probably beyond, will continue to have a significant MiFID II focus as much remains to be done.
One of the next key dates in the implementation timetable is April 30th, by which time, institutions will need to have submitted their RTS 28 reports. RTS 28 encompasses many aspects of the best execution obligations for an institution, and represents a large data gathering, cleansing and reporting exercise. That is burdensome enough, but it is further complicated by ambiguity in what exactly needs to be reported, especially for an OTC market such as FX.
If we look at the RTS 28 Top 5 report alone, which is the only RTS 28 report where the legislation provides a specific template, then ambiguity exists even here, and can be summarised in the following areas:
a) Venue vs Channel vs Counterparty
For the FX market, with a hybrid market structure of both quote- and order-driven activity, there is confusion over the definition of these terms. If you are executing an RFQ order, over a multi-dealer platform (e.g. FXall or FX Connect), with a panel of 5 liquidity providers then you could define the multi-dealer platform as the Channel, and the winning liquidity provider as the Counterparty. So, in this example, there is no Venue? But what if the multi-dealer platform is an MTF? Clearly, even in the simplistic case of an RFQ trade there is scope for confusion.
In the case of an algo trade that has been initiated via a multi-dealer platform, with a bank, additional complications arise. The bank's smart order router will be directing the algo child fills across multiple venues, so in this case the Channel, Counterparty and Venue, at least for each child slice of the algo, would appear clear. However, if the algo was spot and not linked to an underlying securities transaction, i.e. it does not fall into the 'Associated Spot' category for MiFID II reporting purposes, then technically speaking the trade should not be included in RTS 28 reporting. But what if, once the algo had completed, forward points were then applied to the algo spot rate to roll the trade forward? The parent trade is no longer spot, and does now fall within MiFID II reporting requirements.
b) Passive vs Aggressive
Again, for the hybrid world of FX where there is still a very large proportion of quote-driven business, how should the definition of passive or aggressive be applied? Reading the regulatory text would indicate that any trade which has paid bid-offer spread is technically an aggressive trade, whereas 'earning' spread would constitute a passive fill. There are conflicting views on this across the industry. For many of our clients, these fields are generally ignored for FX if they do not execute any of their business via orders or algos, or have direct market access. For orders and algos, however, data is provided by the majority of liquidity providers on whether the order was filled passively or not. This data is not yet consistently available across the industry, or provided in a consistent format, but is becoming increasingly prevalent.
c) Directed trades
For many mandates, FX transactions are 'directed' to a specific counterparty under the terms of the IMA. Such transactions should be split out and identified in the Top 5 report. However, many asset managers net transactions across portfolios, with the net execution result then allocated back across the individual accounts within the block. This can result in complications whereby trades for non-directed accounts are included in a directed block, because there was a benefit from a netting perspective, so the parent block can no longer simply be included in the Top 5 directed field. This would need to be done at the level below, i.e. individual allocations or child trades, so the concept of multi-tier trade hierarchies is required.
Other reporting requirements
RTS 28 is not just about supplying a Top 5 report. Analysis of the execution obtained across these Channels, Counterparties or Venues is also required, with a view to understanding whether there is consistency across allocated volume and performance. But the definition of performance is no longer simply 'best price'. Indeed, the MiFID II definition of best execution refers to a range of factors, including price, some of which may be relevant to some institutions in the way they execute in a hybrid FX world, and some of which won't be. Clearly, these factors need to be defined, prioritised and set in accordance with each institution's best execution policy. Only when this has been done can any view of overall 'performance' be measured, aggregated and reported.
Over time it is fair to assume that these ambiguities will decrease as market consensus develops and further guidance from bodies such as ESMA is provided, especially once a review following the first reporting cycle is concluded. In the meantime, however, institutions are figuring it out for themselves. At BestX, our approach has been to take outside counsel advice from Linklaters, which has helped provide clarity on reporting requirements in addition to the Top 5 report (e.g. the approach taken to the associated performance reports), and also to ensure that the reporting software is as flexible as possible to accommodate different interpretations and requirements.
BestX allows an institution to define exactly what execution factors are relevant for their specific business and best execution policy. This allows a customised measure of performance to be constructed across any entity, including Channel, Counterparty and Venue. This framework forms the foundation for our Regulatory Reporting module, which allows a client to fully customise and configure exactly what they would like to include in their RTS 28 Top 5 report and also generates the associated performance reports. For example, some clients may wish to generate Top 5 reports for Channel, Counterparty and Venue. Some clients have made the decision to include all spot transactions, regardless of whether the trades are associated or not. Given the delay in LEI code assignment, we also allow reports to be constructed without this official designation to at least ensure that the first round of reports in April can be generated.
It is clear that regulators are looking for evidence of a best efforts approach to satisfying the reporting requirements, so a pragmatic and flexible approach is probably a decent strategy in these early months of a post January 3rd 2018 world.
 For younger readers, this relates to January 1st 2000, when the world waited with bated breath to see if computers would continue to function
 Please contact us if you would like further information on this legal opinion (firstname.lastname@example.org)
“Big Data is like teenage sex: everyone talks about it, nobody really knows how to do it, everyone thinks everyone else is doing it, so everyone claims they are doing it.”
As part of our ongoing quest to enhance our analytics, and to continue to meet our clients' requests, we have been spending considerable time over the last few months researching ideas to model the expected cost arising from the forward point component of FX transactions. Such a model would complement our existing framework for estimating expected costs for the spot component.
This research is far from straightforward. The FX Forward market is still a largely voice driven market, often with significant biases in pricing arising from supply and demand or basis factors. This results in a general lack of high quality data with which to perform rigorous modelling. At BestX, however, we do now have a unique data set of traded data that allows for such research and we hope this will provide the foundation for the production of such a model.
We have decided upon a two-phase approach. Phase 1 will be a relatively simple, yet pragmatic, extension of our existing parametric model for expected spot costs. We plan to launch this in Q1 to meet the initial need for a fair value reference point for the costs arising from forward points. Phase 2 is a longer-term project, which will take us down the road of a data-driven approach, as there are indications that a parametric model will have limitations when attempting to model the foibles of the forward FX market. We are already planning for this and have started research into using machine learning methods, including nearest neighbour algorithms, to cope with the complexity of this market. As part of this research, one of the initial pieces of work was to try to understand what the key drivers of FX forward costs actually are, as we are aware of the risks of applying machine learning to big data sets without an understanding of the fundamentals. We have summarised the initial findings of this work here.
Not everything that can be counted counts, and not everything that counts can be counted.
The demand for transparency within the execution process has increased significantly over recent years within the FX market. Indeed, BestX was founded to try to help meet this demand and we have adopted this theme within everything we do. We set out to build a market-leading set of analytics and ensure that all of our clients have total transparency around the workings of these models. Such analytics can only add real value if they are powered with the highest quality market data. Transparency around the market data inputs used is therefore also critical and we have invested significantly in order to build a comprehensive view of the FX market. This article explores some of the thinking behind our approach and why we believe it is important to generate the most broad, independent and representative view of the market.
In an OTC market such as FX, one of the biggest challenges when trying to compute accurate execution metrics is gathering a data set which fulfils the necessary criteria. Below are some of the common themes and challenges in building such a data set.
· Breadth and independence of data
One of the most common topics when discussing market data and benchmarking is the breadth of sources used and the independence of such sources. Independence and the complete absence of any bias are critical in delivering a market standard for FX best execution metrics. Computing a mid based on a broad array of liquidity providers globally is far more valuable than generating a potentially skewed mid based on a specific sector of the market. For example, if a mid were computed based on liquidity sources biased towards non-bank high frequency traders, this would clearly be inappropriate for use in estimating costs for large institutional asset managers. BestX takes market data from over 100 liquidity providers, supplied through a number of pipes, including Thomson Reuters, ICE and EBS. Thomson Reuters is not the only source and, even if it were, it would not be a single price, as data from all of the individual liquidity providers is accessed.
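As a simple illustration of forming a mid from many liquidity providers' quotes, the sketch below uses a best-bid/best-ask convention over the consolidated quote set. This is one of several possible conventions (volume-weighting and outlier filtering are common refinements) and is an assumption for exposition, not BestX's actual methodology:

```python
from typing import List, Tuple

def consolidated_mid(quotes: List[Tuple[float, float]]) -> float:
    """Given (bid, ask) quotes from many liquidity providers, build a
    consolidated top of book and return the mid of the best bid and ask."""
    best_bid = max(bid for bid, _ in quotes)
    best_ask = min(ask for _, ask in quotes)
    if best_bid >= best_ask:
        raise ValueError("crossed consolidated book; inputs need filtering")
    return (best_bid + best_ask) / 2.0

# Three hypothetical LP quotes for a EURUSD-like pair
quotes = [(1.1000, 1.1002), (1.1001, 1.1003), (1.0999, 1.1004)]
print(consolidated_mid(quotes))  # mid of best bid 1.1001 and best ask 1.1002
```

The point the article makes follows directly: if `quotes` is drawn from a narrow or biased subset of providers, the resulting mid is skewed, which is why breadth and independence of sources matter.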
· Generating benchmarks based on client specific liquidity providers
This is an interesting point and one which we debate frequently. Aside from the fact that regulations such as PRIIPs stipulate gathering data from as representative a set of sources as possible, we believe that for the institutional market it is important to portray a view of the total market. Simply computing costs based on a client's specific liquidity sources is self-reinforcing, and it could be argued that this does not satisfy best execution: perhaps there are other sources a client could access but currently doesn't? In addition, there is a growing demand for one level playing field on which to compute costs, which could be used to meet demands for, for example, peer analysis. If the market data set were tailored for each client in this universe then we would always be comparing apples and oranges.
At BestX, we do also provide the ability for clients to submit their quote data, which we will use as additional benchmarks if so desired, as some best execution policies require this. However, we provide these metrics in addition to the spread to mid costs based on the full market-wide data set.
· Internal pools
We would argue that, even if it were available, data from liquidity providers' internal pools would add little value when trying to assess price discovery and generate a market mid. The price-forming data and flow is available via the lit electronic marketplace, where liquidity providers risk manage the 'exhaust' of their inventory. The activity of internal pools is interesting, but would not add value in determining the market mid at any one point in time; e.g. having offsetting trades match and internalise wouldn't necessarily change where the external market is trading.
Internalisation clearly contributes significant value to the overall best execution outcome, and we measure this value via other factors (e.g. through post-trade market revaluation and impact metrics).
· Timeliness of data
There is a lot of focus on market data sources and independence, and rightly so. In addition, however, there is also a requirement to ensure that data is timely, especially in the FX market. Using stale data, for example snapped at 1 or 5 minute intervals or worse, can potentially generate erroneous cost and slippage metrics. It is imperative to be gathering data at millisecond frequency and in real time, to allow for immediate transaction analysis if required.
· The FX Tape and other potential sources
The recent announcement of the launch of a tape for the FX market is an interesting development. Clearly, this is an initial step and there are many questions still around exactly what will be available, at what cost and with what lag. It may be that it could provide BestX, and all other providers, with an additional ‘official’ source of traded price data, although for it to be truly representative it will require all of the large liquidity providers to participate fully. This would, obviously, be extremely valuable and could be used in addition to the broad market data set we already consume and aggregate.
Equally we will be following the evolution of what trade data becomes available via the APAs once MiFID II goes live. It is unclear at this stage exactly what will be available and how timely the data will be, but it could provide an additional source. The trade data that became available following Dodd-Frank disappointed to some extent as it wasn’t rich enough to use for rigorous analytical purposes, so we are reserving judgement on the potential data riches that may flourish from MiFID II until we can actually see it.
We don’t generate pools of liquidity adjusted for different credit quality or capacity. The philosophy is to generate a representative picture of the institutional market that can be broadly applied to compare and contrast performance and cost metrics. Additional benchmarks can be customised on a bespoke basis to service specific liquidity pools if required.
OTC markets make the provision of representative, accurate TCA metrics difficult. FX doesn’t have a National Best Bid and Offer (NBBO), there isn’t a source of public prints and there is little consistency across the industry in terms of what data is made available. The current situation may obviously change over the next few years, for example, via the FX tape or a shift to an exchange based market structure, but it seems unlikely to happen in the medium term. We have taken the pragmatic, and rigorous, approach to gathering as much high quality data that we can and use it in a thoughtful way across a suite of analytics. One of the core tenets of BestX is the delivery of an analytics product that is totally free from any conflict or bias. Independence and total transparency is therefore critical, both in terms of the analytics and input market data.
The release of the final Global Code of Conduct ("Code") on 25 May 2017 is a watershed moment for the foreign exchange (FX) market. The FX market, which is a global decentralized market for the trading of currencies, is the largest market in the world in terms of trading volume, with turnover of more than $5 trillion a day. The Code was developed by the Foreign Exchange Working Group ("FXWG") working under the auspices of the Markets Committee of the Bank for International Settlements ("BIS"). The Code was also created in partnership with a diverse Market Participants Group (MPG) from the private sector. A Global Foreign Exchange Committee, formed of public and private sector representatives, including central banks, will promote and maintain the principles.