In our latest research we explore the arcane subject of comparing different methodologies for measuring high frequency volatility. This can be more art than science, but a robust intraday volatility measure is extremely useful for many aspects of modelling. Within the BestX framework, we use such a metric to help estimate opportunity cost within our Pre-Trade module. Going forward we also plan to introduce a machine learning based model to help predict intraday volatility regimes, and this work serves as a building block for the more complex models to come. Please email firstname.lastname@example.org if you are a BestX client and would like to receive a copy of the article.
“Somebody said it couldn't be done”
Edgar A. Guest
At a recent fixed income conference, the title of the obligatory TCA session was ‘measuring the unmeasurable’. There are many in the industry who still hold this view: that traditional TCA is either not relevant to, or impossible to measure within, Fixed Income. However, at BestX, we would prefer to think along the lines of ‘some of it isn’t measurable, but for a large proportion of the Fixed Income market, it is possible to generate some meaningful analysis that can add value’. Admittedly, not as snappy a title for a conference session, but a fair reflection of the current reality. In this article we draw on our own experiences of expanding our BestX product across Fixed Income to highlight some of the issues and how we’ve tried to address them.
There are many hurdles to leap when attempting to build viable analytics to measure TCA for Fixed Income markets. To begin with, it is worth clarifying that in our view, TCA is a somewhat misleading term. We see measuring transaction costs, in the traditional TCA sense, as an essential component of an overall suite of best execution analytics, that seek to add value across the lifecycle of the trade and best execution process. But only a component.
Let’s return to first principles and recap what best execution actually is. Not in terms of the formal regulatory text definition but in practical terms: what does it really mean? We distil the essence of best execution into the following six points:
1. Best execution is a process
2. Covers lifecycle of a trade, from order inception, through execution to settlement
3. Requires a documented best execution policy, covering both qualitative and quantitative elements
4. Process needs to measure and monitor the execution factors relevant to a firm’s business
5. Any outlier trades to the policy need to be identified, understood and approved
6. Requires continual performance assessment, learning and process enhancement, i.e. a best ex feedback loop
Ok, so if we agree this is what we are trying to achieve in order to deliver upon our fiduciary responsibility to asset owners and our desire to optimise performance, it is clear that a simple PDF or Excel report with a single cost number, measured versus a single closing price of the day, is not going to be sufficient. Clearly a technology solution is required, one that can be configured to an institution’s specific business, trading protocols and best execution policy. This solution needs to measure the relevant performance factors, automate the outlier monitoring process and provide flexible, interactive reporting analytics to dive into performance results and seek areas where additional value can be extracted.
Within such a solution, as accurate a measurement of cost as possible is obviously extremely important, and this was the first challenge we sought to tackle in our expansion to Fixed Income. For non-Fixed Income people it may not be obvious why this is such a challenge, so at the risk of stating the obvious: it is all about market data availability, and the issues here are numerous.
Now, we were hoping, as were many others, that the sunlit uplands post-MiFID II were going to be filled with overflowing volumes of cheap, if not free, market data that all market participants could readily delve into. This has not transpired. However, there are sources of quality, timely data available, at a price, and this is where we turned. We didn’t want to build a Fixed Income product that wasn’t commensurate with the quality of our award-winning FX product, so high quality market data was essential. However, given the sheer breadth and complexity of the Fixed Income market, where there are millions of securities traded around the world, there are always going to be gaps. Such gaps may be short term due to, for example, new issues, or more structural, for example, complex structured sections of the market just don’t trade in the conventional sense. This required thought when building the trade parsers and analytics engine to mitigate gaps in market data coverage and quality, a challenge made easier given the modern cloud-based technology stack we are working within at BestX.
With the best market data available, and applying innovation to the analytics, there is still the need for a healthy dose of pragmatism when measuring transaction costs in Fixed Income. Indeed, a client recently told us that in Fixed Income “a price is not a price, it is simply an opinion”! There are always going to be highly illiquid sections of the market that do fall within the unmeasurable category, but we have found that it is possible to construct accurate measures of mid for the vast majority of bonds. This, in turn, allows decent spread cost numbers to be measured for any given time stamp.
Time stamps. Another data issue altogether, although one we are familiar with from FX land. Fixed Income execution is becoming increasingly electronic and automated, a required development as buy-side execution desks are increasingly asked to do ‘more with less’, with traders having to execute more tickets, become responsible for more products and develop experience in more execution protocols. From the analysis we have done so far, time stamping for trades executed over the various MTFs looks pretty robust, as you would expect. Issues tend to arise in voice executed business, although here the quality of time stamps is improving post MiFID II. It goes without saying, but we will say it again anyway: it is impossible to measure anything accurately without a decent time stamp.
Issues around data accuracy (trade data, time stamps, market data and benchmarks) appear to be the number one priority for clients when research surveys are conducted. Getting all of this as right as possible is deemed much more important than, for example, automated connectivity with EMS/OMS platforms, at least for now. We expect demand for such connectivity to rise once the basics are in place.
Making it actionable
Another of the common complaints around applying TCA in markets such as Fixed Income is ‘what do we do with the output?’. Measuring a simple cost number on an infrequent basis doesn’t lend itself to making any such analysis actionable. This is why it is key to implement any TCA metrics as part of a best execution workflow and lifecycle.
Back in October 2016 we talked about feedback loops in the best execution process and applying the concept of marginal gains to improve performance. There are many decision points in the execution process where additional data-driven evidence can help the trader make a more informed decision, for example:
- What time of day should I trade?
- What size of trade should I execute?
- Who should I execute with?
- Should I trade principal or request the counterparty to act as agent for my order?
- If I trade principal, should I trade via the phone or electronically?
- If I trade electronically, which platform should I trade on?
- Should I hit a streaming firm electronic price, or should I trade RFQ?
- If I RFQ how many quotes should I request?
- How quickly should I trade?
To ensure any output from your Fixed Income analytics can be actioned it is essential to have the following components to your TCA/best ex platform:
1. Intuitive, graphical interface which allows large data sets to be queried and filtered quickly
2. Enough data to make the performance results significant
3. Timely information and analysis
4. Ability to get into detailed diagnostics on a trade by trade level if required
We were already aware how important the last point was whilst building out our FX product. However, Fixed Income, with its more complex analytics, has increased this requirement even further. For example, it is imperative to be able to understand where a specific cost number is coming from, down to an understanding of which benchmark bonds were used to construct the yield curve if sufficiently fresh prices weren’t available in the specific security.
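To make the curve-based diagnostic concrete: when a specific bond has no fresh price, a proxy mid can be read off a curve built from liquid benchmark bonds. The sketch below is a minimal illustration of the idea only, assuming simple linear interpolation in yield between benchmark tenors; the function name and curve points are hypothetical, and BestX’s actual curve construction is more involved.

```python
from bisect import bisect_left

def interpolate_yield(curve, tenor):
    """Linearly interpolate a yield for `tenor` (in years) from a
    benchmark curve given as a sorted list of (tenor, yield) points."""
    tenors = [t for t, _ in curve]
    yields = [y for _, y in curve]
    # Flat extrapolation outside the benchmark range
    if tenor <= tenors[0]:
        return yields[0]
    if tenor >= tenors[-1]:
        return yields[-1]
    i = bisect_left(tenors, tenor)
    t0, t1 = tenors[i - 1], tenors[i]
    w = (tenor - t0) / (t1 - t0)
    return yields[i - 1] * (1 - w) + yields[i] * w

# Hypothetical benchmark curve: (tenor in years, yield in %)
curve = [(2, 1.50), (5, 1.90), (10, 2.30), (30, 2.80)]
print(round(interpolate_yield(curve, 7), 4))  # 2.06
```

In practice one would interpolate on a spread-to-benchmark or a fitted curve rather than raw yields, but the diagnostic point stands: the answer depends entirely on which benchmark bonds went into the curve, which is why trade-level drill-down matters.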
So, is Fixed Income TCA measurable? For a large part of this diverse and complex market, we think it is. Is it perfect? No, but our philosophy has always been to be pragmatic, rigorous and thoughtful when building what is possible under the given constraints. Getting such a first iteration out and used by clients allows us to evolve and improve over time, whilst at the same time hopefully benefitting from improved availability, quality and coverage of market data. For example, who knows, but a Fixed Income Tape, as mandated by MiFID II, may even appear one day.
To quote Edgar A. Guest’s tremendous poem again,
Somebody scoffed, "Oh, you'll never do that
At least no one ever has done it."
But he took off his coat, and he took off his hat,
And the first thing we knew, he’d begun it.
With a lift of his chin and a bit of a grin,
Without any doubting or "quit-it".
He started to sing as he tackled the thing
That couldn't done. And he did it.
Ok, in true BestX style, there hasn’t been a lot of grinning, and certainly no singing, but we have done it.
This brief article continues our series of case studies for the practical deployment of the BestX execution analytics software. Please note these case studies should be read in conjunction with the BestX User Guide, and/or the online support/FAQs. This latest article explores the Regulatory Reporting module of BestX, and how we feel it can be utilised to make the RTS28 reporting obligations under MiFID II of some value. We can hear the cries of derision upon reading this statement but, trust us, we do think it is possible to make some of the reporting under RTS28 add real value to your best execution process and the overall goal of improving performance.
Generating the classic Top 5 report appears to add little value on its own – however, if this is produced in conjunction with a report that summarises performance according to your best execution policy (or in MiFID II speak, Order Execution Policy, as defined under Article 27(5)), then we enter the realms of added value. Assessing your top Venues, Counterparties and/or Channels (e.g. MTFs) on an overall performance basis and then marrying this up with where your actual volumes are executed can produce some interesting insights, and potentially actionable items to help improve performance going forward. This is one of the intended, as opposed to one of the many unintended, consequences of MiFID II and requires a systematic, and configurable, framework to measure and compare performance.
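As a minimal sketch of the idea of marrying volume ranking with performance, the snippet below ranks counterparties by executed notional, as in a Top 5 report, and attaches a volume-weighted average cost to each. All field names and figures are hypothetical, and a real RTS 28 report requires many more fields than this.

```python
from collections import defaultdict

def top5_with_performance(trades):
    """Rank counterparties by executed volume (the 'Top 5' view) and
    attach a volume-weighted average cost, so the volume ranking can
    be read alongside execution performance."""
    volume = defaultdict(float)
    weighted_cost = defaultdict(float)
    for t in trades:
        volume[t["counterparty"]] += t["notional"]
        weighted_cost[t["counterparty"]] += t["notional"] * t["cost_bps"]
    ranked = sorted(volume, key=volume.get, reverse=True)[:5]
    return [
        {"counterparty": c,
         "volume": volume[c],
         "avg_cost_bps": weighted_cost[c] / volume[c]}
        for c in ranked
    ]

# Hypothetical trades: Bank A wins most volume but at a higher cost
trades = [
    {"counterparty": "Bank A", "notional": 50e6, "cost_bps": 0.8},
    {"counterparty": "Bank B", "notional": 30e6, "cost_bps": 0.5},
    {"counterparty": "Bank A", "notional": 20e6, "cost_bps": 1.2},
]
for row in top5_with_performance(trades):
    print(row["counterparty"], round(row["avg_cost_bps"], 3))
```

The interesting insight is exactly the mismatch this surfaces: the counterparty receiving the most volume is not necessarily the one delivering the best performance.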
If you are a contracted BestX client and would like to receive the full case study please contact BestX at email@example.com.
This brief article continues our series of case studies for the practical deployment of the BestX execution analytics software. Please note these case studies should be read in conjunction with the BestX User Guide, and/or the online support/FAQs. Given that 2018 is drawing to a close, the focus of this third case study is the generation of year-to-date reports.
BestX provides a multitude of different reporting types, that have many different purposes and use cases. In this case study we cover 3 of the most common:
- Generating annual post-trade performance reports – used for internal best execution review meetings etc
- Generating annual exception report summaries – useful for showing documentary evidence of the implementation, monitoring and adherence to a firm’s best execution policy
- Generating annual total consideration reports – useful for high-level cost summaries for client accounts etc
The flexibility of BestX allows users to run such reports on the entire portfolio, or configured to specific elements of the year’s trading activity to analyse key components.
If you are a contracted BestX client and would like to receive the full case study please contact BestX at firstname.lastname@example.org.
At BestX we are continuing to research methods for enhancing the measurement of expected costs within the OTC markets. A parametric model provides a good approximation, but runs the risk of becoming increasingly complex as it attempts to capture every edge case, especially for less liquid currency pairs, unusual times of day or larger sizes. We have been conducting research in parallel to develop a new framework which is purely data driven, using machine learning methods. More work needs to be done, but this paper provides results for the framework when applied to measuring the forward component of FX transaction costs, a notoriously difficult part of the market to model given its voice-driven, OTC nature. For a copy of the paper please email us at email@example.com.
This brief article continues our series of case studies for the practical deployment of the BestX execution analytics software. To receive the full case study please contact BestX at firstname.lastname@example.org. Please note these case studies should be read in conjunction with the BestX User Guide, and/or the online support/FAQs. The focus of this second case study is the monitoring of trends in execution performance, a key component of the feedback loop required within a best execution policy. The BestX Trend module allows the construction of tailor-made analyses of trends in any of the different metrics that BestX computes, thereby providing flexibility to add value to any institution’s specific policy and process.
Monitoring execution performance over time, and learning lessons to help refine the policy and process further, is not simply something to satisfy any regulatory or fiduciary responsibilities. In our view it is a fundamental part of improving performance to improve returns, and can be applied to many constituents of the execution process, for example:
- Monitoring liquidity provider performance over time, by product, by ccy pair, by trade size, by time zone etc
- Monitoring algo product performance
- Assessing how different venues perform over time
- Are there any market structure changes occurring that are resulting in different execution methods performing differently, e.g. RFQ vs streaming prices?
- Is market liquidity changing over time, e.g. is my business creating more impact than it used to?
We explored the concepts of feedback loops and marginal gains in a previous article, published in Oct 2016. This case study helps put concepts into practice using the BestX software.
To receive the full case study, please email us at email@example.com
Style is a simple way of saying complicated things
A common request from many of our clients over recent months has been to categorise algo products into a number of different styles. This is far from straightforward given the plethora of algo products now available (in BestX alone we have now seen 113 different algos across many providers), and also due to the fact that the market, and product innovation, continues to evolve rapidly. Given this breadth and complexity, it is probably pragmatic to start with a reasonably simplistic taxonomy of styles, which can always be refined over time. In this short article we introduce our initial family of algo styles based on a number of discussions with our clients.
To kick off, we are proposing 4 key algo styles, summarised in the diagram below, which stem from the initial key objective: is the algo attempting to achieve a specific benchmark or not?
For non-benchmark algos, we have suggested 2 style groups:
1. Get Done
2. Opportunistic
Whereas for algos that are designed to minimise specific benchmark slippage, we have suggested an additional 2 style groups:
3. Interval Based
4. Volume Based
In addition, for each algo there are additional attributes that can be used to describe behaviour:
1) Limit – whether the algo had a limit price applied to it or not, and,
2) Urgency – a data field describing the urgency, or aggressiveness, of the algo. There are many different forms of Urgency used in the market so, to simplify, we are condensing these into 3 values: Low, Medium or High
The additional attributes are important to allow an apples vs apples comparison, for example comparing a sample of algos within a category where some have hit limits and others have not could pollute the performance results. Equally, if you were analysing a group of Opportunistic algos, it would be preferable to stratify the sample into groups with similar Urgency settings.
Going into these 4 categories in a little more depth:
1. Get Done – this family of algos is expected to include more aggressive algo types where the priority is less on minimising market impact or earning spread, but more focused on getting a specific amount of risk executed as quickly and efficiently as possible. Many providers offer products that are named ‘Get Done’.
2. Opportunistic – this group is anticipated to include an array of products, that don’t have a specific benchmark they are looking to achieve. Algos falling within this group are expected to include the array of products in the market that are seeking to maximise spread capture, unencumbered by a strict schedule dictated by a benchmark. Urgency is often a key parameter within this style, determining how passive the algo is prepared to be in order to earn spread.
3. Interval Based – this group includes all algos that are attempting to minimise slippage to a benchmark where the algo slices the parent notional according to an interval, or time, based schedule. So, for example, this group would include all TWAP algos, the most commonly used algo within the FX market currently.
4. Volume Based – we anticipate this group to be the most sparsely populated given the largely OTC market structure of FX. Products such as ‘Percentage of Volume’ or ‘POV’ would fall within this style: algos attempting to execute in line with a specified % of volume traded within the market. This algo style has been adopted from the listed markets, where it is obviously easier to hit a % volume target given the availability of volume data. In FX, any target will be approximate given the inexact nature of total volumes traded at any given point. VWAP, or Volume Weighted Average Price, algos would also fall within this style but, again, they are less common in FX given the difficulties in measuring actual volumes.
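As a concrete illustration of the interval-based style, a plain TWAP simply slices the parent notional evenly over the execution window. The sketch below is a deliberately naive toy version; real products randomise slice timing and size to reduce signalling risk, and the function name and figures are our own for illustration.

```python
def twap_schedule(notional, start_min, end_min, n_slices):
    """Split a parent order into equal child slices evenly spaced over
    the execution window: the simplest interval-based schedule."""
    slice_size = notional / n_slices
    interval = (end_min - start_min) / n_slices
    # Each entry is (time offset in minutes, child slice notional)
    return [(start_min + i * interval, slice_size) for i in range(n_slices)]

# Hypothetical parent: 100m worked over 60 minutes in 6 slices
for t, size in twap_schedule(100e6, 0, 60, 6):
    print(f"t+{t:.0f}min: {size:,.0f}")
```

A volume-based (POV/VWAP) schedule would instead weight each slice by expected market volume in its interval, which is precisely where FX’s inexact volume data makes life harder.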
The proposed taxonomy is, by construction, a simplification of a complex ecosystem of algos, which will require compromises to be made when categorising products. Hopefully the additional Limit and Urgency fields will help with the grouping to some extent. The objective of this exercise was to propose something simple, pragmatic and understandable, whilst trying to provide a decent representation of the current algo product array. The Algo Style field will be going live within the next BestX release. We recognise this is likely to be an iterative process, but we felt it was important to respond to client demand and take the initiative. We do not seek to impose our view on the market but, as always, we hope for feedback, and expect that this concept will continue to evolve over time as we seek to represent the majority view in the market.
This brief article introduces the first in a series of case studies for the practical deployment of the BestX execution analytics software. To receive the full case study please contact BestX at firstname.lastname@example.org. Please note these case studies should be read in conjunction with the BestX User Guide, and/or the online support/FAQs. The focus of this first case study is the identification of outlier trades, and the management of the workflow around identification, explanation and approval of such exceptions. We explored this topic conceptually in an earlier article, ‘Red Flags and Outlier Detection’, whereas this case study explores how to implement the concept in a practical context using the BestX software.
There is a clear consensus across the industry that the key element of any best execution policy is the process, and not necessarily individual outcomes of specific trades. The objective is to implement and monitor a best execution policy, and then refine it over time based on the iterative process of reviewing performance on an ongoing basis. A core component of this process is the identification of trades which are exceptions to the policy to help provide insights into where the policy may need refining. The BestX product allows an institution to ‘codify’ its specific best execution policy, allowing user defined thresholds to specify exactly which trades should be identified as exceptions.
At BestX we have now observed many different use cases for the exception reporting functionality, and not all are for compliance purposes. For example, systematic identification of particular algos that create significant market impact in a chosen group of currency pairs, may be a useful input into refining the best execution policy around algo selection. Common examples of exception reports include:
1. Notification of any trade where the actual spread cost incurred is greater than the estimated BestX expected cost
2. Identification of any trades that breach agreed commission rates to, for example, the WMR mid benchmark rate
3. Identification of trades generating significant pre or post trade market impact
4. Identification of any trades falling outside the trading day’s range
5. For a matched swap portfolio, identification of any trades where the cost arising from the forward points exceeds either the BestX expected cost, or a defined threshold
6. Identification of algo trades creating potential signaling risk, or significant market impact
Clearly, identification of outlier or exception trades is a critical component of best execution. It is also essential that any such process is flexible, both in terms of which metrics you wish to monitor and the thresholds you specify for them, above which a trade is defined as an outlier to your policy. We have learnt at BestX that there really isn’t market consensus or a ‘one size fits all’ approach to defining outliers across the industry. To receive the full case study, please contact us at email@example.com
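The idea of ‘codifying’ a best execution policy can be sketched as a set of per-metric thresholds checked against each trade. The snippet below is purely illustrative of that shape: the field names, metrics and threshold values are hypothetical and would be configured per institution, which is exactly the flexibility point made above.

```python
def flag_outliers(trades, policy):
    """Flag trades breaching any per-metric threshold defined in a
    (hypothetical) codified best execution policy. Each exception
    records which metrics breached and by what observed value."""
    exceptions = []
    for t in trades:
        breaches = [
            (metric, t[metric], limit)
            for metric, limit in policy.items()
            if t.get(metric, 0.0) > limit
        ]
        if breaches:
            exceptions.append({"trade_id": t["trade_id"], "breaches": breaches})
    return exceptions

# Hypothetical policy: flag spread cost > 2bps or post-trade impact > 5bps
policy = {"spread_cost_bps": 2.0, "post_trade_impact_bps": 5.0}
trades = [
    {"trade_id": "T1", "spread_cost_bps": 1.1, "post_trade_impact_bps": 0.4},
    {"trade_id": "T2", "spread_cost_bps": 3.5, "post_trade_impact_bps": 6.2},
]
print(flag_outliers(trades, policy))  # only T2 is flagged, on both metrics
```

In a production workflow each flagged trade would then feed the explain-and-approve step described in the process above, rather than simply being printed.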
Prediction is very difficult, especially if it's about the future
In the context of achieving best execution, there is a growing focus on the pre-trade stage of the process. Accessing the most appropriate information at the point of trade can help execution desks make informed decisions around their various execution choices, including timing and style of execution.
When making an informed decision at the point of trade, one key input is an understanding of the prevailing liquidity conditions, and if not trading via risk transfer, an indication of how those conditions will develop during the period that the order is worked in the market. This is not as straightforward as it sounds, even for a listed market where liquidity data is readily available. For a market such as FX, which has a complex, hybrid market structure, with liquidity fragmented over many venues, it is extremely difficult.
Hence the need to look for help outside of traditional financial theories and practices. At BestX we are always creatively looking to solve problems by thinking laterally and learning from other disciplines and industries.
We are aware that the statistical properties of trading activity in FX exhibit clustering behaviour, similar to volatility, in that periods of high trading activity are more likely to be followed by further periods of high trading activity. This ‘memory’ effect is evident, for example, around key points in a trading day such as the release of key economic data, times of key benchmarks such as WMR etc. This behaviour led us to explore a statistical framework that was first used to model the impact of earthquakes, in terms of predicting the frequency and magnitude of subsequent tremors. This is analogous to the observed characteristics of trading activity and volumes, i.e. there is a ‘shock’, for example an unexpected interest rate rise from a central bank, which results in a significant spike in trading volume, which then proceeds to have ripple effects throughout the market over an ensuing period of time. Clearly, there would be significant value in having some form of estimate of how the ensuing liquidity conditions would evolve, both in terms of magnitude and frequency.
Before we explore such a framework further, let’s just return to why this should be of any use at all. Well, for example, if you are looking to execute a large block of risk, say 1bn USDJPY, and have discretion on the execution approach and timing, then having some idea of how volumes in USDJPY may evolve over the next hour could be a key input into your decision making. If your objective is to minimise spread cost, and you are therefore leaning towards selecting an algo designed to be passive in the market and ‘earn’ spread, then a key variable for the performance of such an algo will be the depth of liquidity available to it whilst it is working. If, from previous experience, you know that the algo you are planning to use performs particularly well in market states characterised by high volumes, then a prediction of volumes will help you decide whether to press ahead with this strategy or perhaps select a different product. Or, you may be seeking to minimise market impact and wish to time the trade to coincide with a high volume period to help ‘hide’ the trade. Insight into when such periods may occur over the next few hours is clearly helpful in this decision making.
Back to earthquakes. A statistical framework called the Hawkes Process has been used for some time for modelling earthquakes. This framework relies on historical information but is ‘self exciting’, the technical term for the phenomenon whereby, when an event occurs, e.g. an earthquake, the probability of further events occurring, i.e. tremors, increases. From a volume prediction perspective, this can be thought of as follows: a spike in trading activity due to a data release will generally result in increased subsequent trading activity. Over time, assuming there aren’t any further events, trading activity will revert back to its normal state. The Hawkes Process attempts to capture this behaviour.
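For the technically minded, the conditional intensity of a univariate Hawkes process with exponential decay can be written as a baseline rate plus a decaying contribution from every past event: lambda(t) = mu + sum over past events t_i of alpha * exp(-beta * (t - t_i)). The toy sketch below, with hypothetical parameters of our own choosing, shows the self-exciting behaviour described above: a burst of events raises the expected rate of further events, which then decays back to the baseline. It is an illustration of the mechanism only, not the BestX model.

```python
import math

def hawkes_intensity(t, events, mu, alpha, beta):
    """Conditional intensity of a self-exciting (Hawkes) process:
    a baseline rate `mu`, plus an exponentially decaying kick of
    size `alpha` for each past event, decaying at rate `beta`."""
    return mu + sum(alpha * math.exp(-beta * (t - ti))
                    for ti in events if ti < t)

# Hypothetical parameters: baseline 1 event/min; each event adds 0.8
# to the rate, decaying at 1.2 per minute.
events = [1.0, 1.5, 2.0]  # e.g. a burst of trading after a data release
print(round(hawkes_intensity(2.5, events, mu=1.0, alpha=0.8, beta=1.2), 3))
```

Shortly after the burst the intensity sits well above the baseline; evaluate the same function a few minutes later and it has decayed back towards mu, which is the ‘revert to normal state’ behaviour described above.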
As a brief aside, just to further clarify a key point: we are not attempting to predict an actual earthquake, or market shock. We are trying to predict what the likely impact of such an event will be. We were once asked if BestX had a model for predicting ‘flash crashes’; the query was met with an incredulous pause. If we had a model for predicting flash crashes then it is highly likely that the BestX business model would be very different, and would probably involve relocating to the Cayman Islands and making lots of money.
Going back to Hawkes. Importantly, the hierarchical nature of the framework allows it to capture the different volume distributions typically observed on different days of the week. The amount of trading also varies by day, with Mondays usually the quietest and Wednesdays, on average, showing the largest volumes. The charts below display the average intraday volume profiles for USDCAD over the course of 2016, also highlighting how different the pattern is on days such as Non-Farm Payroll days, where trading activity is concentrated around the time of the data release.
So, does it work? Well, it is still early days, but the initial results look very encouraging. The charts below show the predicted volume profile compared to the actual profile (black line), with the shaded area representing the 95% confidence interval for the prediction.
We are now going to put the research into a development environment which will allow more systematic testing of the framework, across many more currency pairs and using a much deeper data set to ‘train’ it. In addition, we wish to model other key event days such as ECB and FOMC meeting days, and also month, quarter and year-ends. Assuming the framework continues to perform to our satisfaction, we will then look to incorporate it within the BestX Pre-Trade module.
“Turn and face the strange”
In our latest paper we continue our research into measuring the impact of the ‘turn’ within the FX markets. This somewhat strange phenomenon, which manifests itself around key dates throughout the year, is generally caused by supply and demand for funding by large financial institutions, which can create dislocations in the forward curves for certain currencies. In this latest research we empirically measure the impact of the turn around a range of different dates, including year, quarter and month end, but also event days such as NFP, FOMC etc. In summary, we find that the impact is most significant for year and quarter ends, with, for example, an average magnitude at year-end of between 0.6 and 1.5 bps for EURUSD depending on the tenor of the transaction. The work has helped us prioritise where to adjust the forward curve interpolation to better estimate mid for broken dates. To receive a copy of the paper please email us at firstname.lastname@example.org
“I love deadlines. I like the whooshing sound they make as they fly by”
MiFID II is here. From all accounts, January 3rd 2018 was similar to Y2K day, in that everyone woke up, went to work and, generally speaking, everything carried on as usual. It is clear that the ‘soft’ launch of MiFID II has resulted in no discernible disruption from a liquidity or execution perspective, but a number of elephants in the room were simply postponed, e.g. the additional 6-month grace period for assignment of LEI codes. So, those that were waiting for January 3rd to come and go in the hope that they could leave MiFID II behind them, and get on with their day jobs, are going to be disappointed. 2018, and probably beyond, will continue to have a significant MiFID II focus, as much remains to be done.
One of the next key dates in the implementation timetable is April 30th, by which time, institutions will need to have submitted their RTS 28 reports. RTS 28 encompasses many aspects of the best execution obligations for an institution, and represents a large data gathering, cleansing and reporting exercise. That is burdensome enough, but it is further complicated by ambiguity in what exactly needs to be reported, especially for an OTC market such as FX.
If we look at the RTS 28 Top 5 report alone, which is the only RTS 28 report where the legislation provides a specific template, then ambiguity exists even here, and can be summarised in the following areas:
a) Venue vs Channel vs Counterparty
For the FX market, with a hybrid market structure of both quote- and order-driven activity, there is confusion over the definition of these terms. If you are executing an RFQ order, over a multi-dealer platform (e.g. FXall or FX Connect), with a panel of 5 liquidity providers then you could define the multi-dealer platform as the Channel, and the winning liquidity provider as the Counterparty. So, in this example, there is no Venue? But what if the multi-dealer platform is an MTF? Clearly, even in the simplistic case of an RFQ trade there is scope for confusion.
In the case of an algo trade that has been initiated via a multi-dealer platform with a bank, additional complications arise. The bank’s smart order router will be directing the algo child fills across multiple venues, so in this case the Channel, Counterparty and Venue for each child slice of the algo, at least, would appear clear. However, if the algo was spot and not linked to an underlying securities transaction, i.e. does not fall into the ‘Associated Spot’ category for MiFID II reporting purposes, then technically speaking this trade should not be included in RTS 28 reporting. But what if, once the algo had completed, forward points were then applied to the algo spot rate to roll the trade forward? The parent trade is no longer spot, and does now fall within MiFID II reporting requirements.
b) Passive vs Aggressive
Again, for the hybrid world of FX, where a very large proportion of business is still quote-driven, how should the definitions of passive and aggressive be applied? A reading of the regulatory text would indicate that any trade which has paid bid-offer spread is technically an aggressive trade, whereas ‘earning’ spread would constitute a passive fill. There are conflicting views on this across the industry. Many of our clients generally ignore these fields for FX if they do not execute any of their business via orders or algos, or have direct market access. For orders and algos, however, the majority of liquidity providers do supply data on whether the order was filled passively or not. This is not yet available consistently across the industry, or provided in a consistent format, but it is becoming increasingly prevalent.
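Under the spread-paid interpretation described above, the classification can be sketched as a simple rule. The field names and the use of a top-of-book mid are illustrative assumptions for this sketch, not an official MiFID II definition:

```python
def classify_fill(side, fill_price, bid, offer):
    """Classify a fill as 'passive' or 'aggressive' under the
    spread-paid interpretation: paying away from mid is aggressive,
    improving on mid ('earning' spread) is passive."""
    mid = (bid + offer) / 2.0
    if side == "buy":
        # Buying below mid earns spread -> passive
        return "passive" if fill_price < mid else "aggressive"
    if side == "sell":
        # Selling above mid earns spread -> passive
        return "passive" if fill_price > mid else "aggressive"
    raise ValueError("side must be 'buy' or 'sell'")
```

In practice the mid itself would come from an independent market data aggregation, and fills exactly at mid are a policy choice; here they are treated as aggressive.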
c) Directed trades
For many mandates, FX transactions are ‘directed’ to a specific counterparty under the terms of the IMA. Such transactions should be split out and identified in the Top 5 report. However, many asset managers net transactions across portfolios, with the net execution result then allocated back across the individual accounts within the block. This can result in complications whereby trades for non-directed accounts are included in a directed block, because there was a benefit from a netting perspective, so the parent block can no longer simply be reported in the Top 5 directed field. The reporting would need to be done at the level below, i.e. individual allocations or child trades, so the concept of a multi-tier trade hierarchy is required.
Other reporting requirements
RTS 28 is not just about supplying a Top 5 report. Analysis of the execution obtained across these Channels, Counterparties or Venues is also required, with a view to understanding whether there is consistency across allocated volume and performance. But the definition of performance is no longer simply ‘best price’. Indeed, the MiFID II definition of best execution refers to a range of factors, including price, some of which may be relevant to some institutions in the way they execute in a hybrid FX world, and some of which won’t be. Clearly, these factors need to be defined, prioritised and set in accordance with each institution’s best execution policy. Only when this has been done can any view of overall ‘performance’ be measured, aggregated and reported.
Over time it is fair to assume that these ambiguities will decrease as market consensus develops and further guidance from bodies such as ESMA is provided, especially once a review following the first reporting cycle is concluded. In the meantime, however, institutions are figuring it out for themselves. At BestX, our approach has been to take outside counsel advice from Linklaters, which has helped provide clarity on reporting requirements in addition to the Top 5 report (e.g. the approach taken to the associated performance reports), and also to ensure that the reporting software is as flexible as possible to accommodate different interpretations and requirements.
BestX allows an institution to define exactly what execution factors are relevant for their specific business and best execution policy. This allows a customised measure of performance to be constructed across any entity, including Channel, Counterparty and Venue. This framework forms the foundation for our Regulatory Reporting module, which allows a client to fully customise and configure exactly what they would like to include in their RTS 28 Top 5 report and also generates the associated performance reports. For example, some clients may wish to generate Top 5 reports for Channel, Counterparty and Venue. Some clients have made the decision to include all spot transactions, regardless of whether the trades are associated or not. Given the delay in LEI code assignment, we also allow reports to be constructed without this official designation to at least ensure that the first round of reports in April can be generated.
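As a rough illustration of the mechanics behind a Top 5 report (a sketch, not the BestX implementation), ranking entities by traded notional might look like the following; the trade fields are hypothetical:

```python
from collections import defaultdict

def top5(trades, key="counterparty"):
    """Rank entities (Channel, Counterparty or Venue, selected via
    `key`) by total traded notional and return the top five, in the
    spirit of an RTS 28 Top 5 report."""
    totals = defaultdict(float)
    for trade in trades:
        totals[trade[key]] += trade["notional"]
    # Sort descending by aggregated notional and keep the first five
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:5]
```

A real report would additionally split directed from non-directed flow and express each entity's share as a percentage of total volume and order count.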
It is clear that regulators are looking for evidence of a best efforts approach to satisfying the reporting requirements, so a pragmatic and flexible approach is probably a decent strategy in these early months of a post January 3rd 2018 world.
 For younger readers, this relates to January 1st 2000, when the world waited with bated breath to see if computers would continue to function
 Please contact us if you would like further information on this legal opinion (email@example.com)
“Big Data is like teenage sex: everyone talks about it, nobody really knows how to do it, everyone thinks everyone else is doing it, so everyone claims they are doing it.”
Dan Ariely
As part of our ongoing quest to enhance our analytics, and to continue to meet our clients’ requests, we have been spending considerable time over the last few months researching ideas to model the expected cost arising from the forward point component of FX transactions. Such a model would complement our existing framework for estimating expected costs for the spot component.
This research is far from straightforward. The FX Forward market is still a largely voice driven market, often with significant biases in pricing arising from supply and demand or basis factors. This results in a general lack of high quality data with which to perform rigorous modelling. At BestX, however, we do now have a unique data set of traded data that allows for such research and we hope this will provide the foundation for the production of such a model.
We have decided upon a two-phased approach. Phase 1 will be a relatively simple, yet pragmatic, extension of our existing parametric model for expected spot costs. We plan to launch this in Q1 to meet the initial need for a fair value reference point for the costs arising from forward points. Phase 2 is a longer-term project, which will take us down the road of a data-driven approach, as there are indications that a parametric model will have limitations when attempting to model the foibles of the forward FX market. We are already planning for this and have started research into using machine learning methods, including nearest neighbour algorithms, to cope with the complexity of this market. As part of this research, one of the initial pieces of work was to try to understand what the key drivers of FX forward costs actually are, as we are aware of the risks of utilising machine learning on big data sets without an understanding of the fundamentals. We have summarised the initial findings of this work here.
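To give a flavour of the nearest-neighbour idea mentioned above, a minimal sketch might estimate a new trade's forward cost as the average observed cost of its closest historical neighbours. The feature choices are hypothetical and this is in no way the BestX model:

```python
import numpy as np

def knn_forward_cost(features, costs, query, k=5):
    """Estimate the expected forward-point cost of a new trade as the
    mean observed cost of its k nearest historical neighbours in
    feature space (features might be tenor, notional, time of day --
    illustrative choices only)."""
    features = np.asarray(features, dtype=float)
    costs = np.asarray(costs, dtype=float)
    # Euclidean distance from the query trade to every historical trade
    dists = np.linalg.norm(features - np.asarray(query, dtype=float), axis=1)
    nearest = np.argsort(dists)[:k]
    return float(costs[nearest].mean())
```

In practice features would need careful scaling, and the distance metric itself becomes a modelling decision.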
Not everything that can be counted counts, and not everything that counts can be counted.
The demand for transparency within the execution process has increased significantly over recent years within the FX market. Indeed, BestX was founded to try to help meet this demand and we have adopted this theme within everything we do. We set out to build a market-leading set of analytics and ensure that all of our clients have total transparency around the workings of these models. Such analytics can only add real value if they are powered with the highest quality market data. Transparency around the market data inputs used is therefore also critical and we have invested significantly in order to build a comprehensive view of the FX market. This article explores some of the thinking behind our approach and why we believe it is important to generate the most broad, independent and representative view of the market.
In an OTC market such as FX, one of the biggest challenges when trying to compute accurate execution metrics is gathering a data set that is sufficiently broad, independent and timely. Below are some of the common themes and challenges in building such a data set.
· Breadth and independence of data
One of the most common topics when discussing market data and benchmarking is the breadth of sources used and the independence of such sources. Independence, and the complete absence of any bias, is critical in delivering a market standard for FX best execution metrics. Computing a mid based on a broad array of liquidity providers globally is far more valuable than generating a potentially skewed mid based on a specific sector of the market. For example, if a mid were computed based on liquidity sources biased towards non-bank high frequency traders, this would clearly be inappropriate for use in estimating costs for large institutional asset managers. BestX takes market data from over 100 liquidity providers, supplied through a number of pipes, including Thomson Reuters, ICE and EBS. Thomson Reuters is not the only source and, even if it were, it would not represent a single price, as data from each of the individual liquidity providers is accessed.
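As a minimal illustration of aggregating many providers' quotes into a single mid, one robust choice is the median of per-provider mids, which damps the influence of any single skewed source. This is an illustrative sketch, not a description of the BestX methodology:

```python
import statistics

def market_mid(quotes):
    """Aggregate top-of-book quotes from many liquidity providers into
    a single market mid, using the median of per-provider mids so that
    no single outlying or biased source dominates the result."""
    mids = [(q["bid"] + q["offer"]) / 2.0 for q in quotes]
    return statistics.median(mids)
```

A mean would be simpler but is pulled around by a single off-market quote; the median is a common robust alternative.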
· Generating benchmarks based on client specific liquidity providers
This is an interesting point and one which we debate frequently. Aside from the fact that regulations such as PRIIPs stipulate gathering data from as representative a set of sources as possible, we believe that for the institutional market it is important to portray a view of the total market. To simply compute costs based on a client’s specific liquidity sources is self-reinforcing and, it could be argued, does not satisfy best execution, as perhaps there are other sources out there that a client could access but currently doesn’t. In addition, there is a growing demand for a single level playing field across which to compute costs, which could be used to meet demands for, for example, peer analysis. If the market data set were tailored for each client in this universe then we would always be comparing apples with oranges.
At BestX, we do also provide the ability for clients to submit their quote data, which we will use as additional benchmarks if so desired, as some best execution policies require this. However, we provide these metrics in addition to the spread to mid costs based on the full market-wide data set.
· Internal pools
Even if it were available, would data from liquidity providers’ internal pools add any value when trying to assess price discovery and generate a market mid? We would argue not. The price-forming data and flow is available via the lit electronic marketplace, where liquidity providers risk manage the ‘exhaust’ of their inventory. The activity of internal pools is interesting, but it would not add value in determining the market mid at any one point in time, e.g. having offsetting trades match and internalise wouldn’t necessarily change where the external market is trading.
There is clearly significant value within the overall best execution outcome through internalisation, and we measure this via other factors to demonstrate this value (e.g. through post-trade market revaluation and impact metrics).
· Timeliness of data
There is a lot of focus on market data sources and independence, and rightly so. In addition, however, there is also a requirement to ensure that data is timely, especially in the FX market. Using stale data, for example data snapped at 1 or 5 minute intervals or worse, can potentially generate erroneous cost and slippage metrics. It is imperative to be gathering data at millisecond frequency and in real time, to allow for immediate transaction analysis if required.
· The FX Tape and other potential sources
The recent announcement of the launch of a tape for the FX market is an interesting development. Clearly, this is an initial step and there are many questions still around exactly what will be available, at what cost and with what lag. It may be that it could provide BestX, and all other providers, with an additional ‘official’ source of traded price data, although for it to be truly representative it will require all of the large liquidity providers to participate fully. This would, obviously, be extremely valuable and could be used in addition to the broad market data set we already consume and aggregate.
Equally we will be following the evolution of what trade data becomes available via the APAs once MiFID II goes live. It is unclear at this stage exactly what will be available and how timely the data will be, but it could provide an additional source. The trade data that became available following Dodd-Frank disappointed to some extent as it wasn’t rich enough to use for rigorous analytical purposes, so we are reserving judgement on the potential data riches that may flourish from MiFID II until we can actually see it.
We don’t generate pools of liquidity adjusted for different credit quality or capacity. The philosophy is to generate a representative picture of the institutional market that can be broadly applied to compare and contrast performance and cost metrics. Additional benchmarks can be customised on a bespoke basis to service specific liquidity pools if required.
OTC markets make the provision of representative, accurate TCA metrics difficult. FX doesn’t have a National Best Bid and Offer (NBBO), there isn’t a source of public prints and there is little consistency across the industry in terms of what data is made available. The current situation may change over the next few years, for example via the FX tape or a shift to an exchange-based market structure, but that seems unlikely to happen in the medium term. We have therefore taken the pragmatic, and rigorous, approach of gathering as much high quality data as we can and using it in a thoughtful way across a suite of analytics. One of the core tenets of BestX is the delivery of an analytics product that is totally free from any conflict or bias. Independence and total transparency are therefore critical, both in terms of the analytics and the input market data.
The release of the final Global Code of Conduct (“Code”) on 25 May 2017 is a watershed moment for the foreign exchange (FX) market. The FX market, which is a global decentralized market for the trading of currencies, is the largest market in the world in terms of trading volume, with turnover of more than $5 trillion a day. The Code was developed by the Foreign Exchange Working Group (“FXWG”) working under the auspices of the Markets Committee of the Bank for International Settlements (“BIS”). The Code was also created in partnership with a diverse Market Participants Group (MPG) from the private sector. A Global Foreign Exchange Committee, formed of public and private sector representatives, including central banks, will promote and maintain the principles.
“It is not what we choose that is important; it is the reason we choose it.”
Best execution is not simply about measuring transaction costs, and other relevant metrics, after a trade has been executed. Best execution is a process, whereby informed decisions are made throughout a trade’s lifecycle in order to achieve the best possible result for the client. Clearly, a key stage in the trade lifecycle is ‘pre-trade’, which we will explore in more detail in this article.
As we have touched upon in previous articles, the modern foreign exchange market is a complex beast, providing participants with many different methods of execution. For example:
1. Risk transfer over the phone
2. Request for Quote (RFQ) on a multi-dealer platform
3. Request for Stream (RFS) on either multi-dealer or single dealer platforms
4. Algorithmic execution
Within each of these methods, there are a multitude of factors, and therefore additional decisions, to consider. For example, if you are employing RFQ, how many liquidity providers should you request quotes from and which ones? Or, if you are considering algorithmic execution, how do you select from the extensive range of products now available, and when a specific product is chosen, how should you select the parameters to use? In addition, do you want to access the market directly and have your liquidity provider place orders on your behalf, or do you want to simply execute with a counterparty as principal? If the former, are there specific venues you would like to access? The decision making process can become quite complex, analogous to deciding which chain of coffee shops to pop into on the way to work, deciding upon Starbucks and then having to select from the fatuous list of types of coffee, milk, sizes, temperature and strengths.
In our view, best practice is not to necessarily exclude any specific execution method, although not to create a Starbucks situation of too much choice, which can result in paralysis in decision making! It’s ok, I’ll just have a Tetley’s instead. Each method can add value, and be the appropriate choice, for a given trade with specific trading objectives within a particular set of market conditions. There may be occasions where a large block of risk needs to be executed quickly, and quietly, and in such cases voice risk transfer may be appropriate with the optimal liquidity provider, who can warehouse and manage such inventory. There may be other occasions where the objective is to minimise spread paid, and selecting an appropriate algo may be the optimal solution. The added complexity of selecting a specific product from a specific provider should not be a deterrent to including algos on your ‘menu’ of execution methods. Such products can add significant benefits to the best execution process in terms of cost savings.
Analytics, data and technology can help simplify this process, and in particular pre-trade analytics.
Reading through MiFID II, and other initiatives such as the Global Code of Conduct, doesn’t reveal a detailed specification of what is expected or required when it comes to pre-trade analysis, at least from a best execution perspective (N.B. we’re not covering here the pre-trade reporting and transparency aspects of MiFID II; we are simply focusing on how pre-trade analysis can help deliver against the definition of best execution). In the absence of anything official, we thought it might be useful to put some thoughts together on what best practice may look like, at least for FX in the first instance.
1. Trade selection
It doesn’t seem to make sense to perform value-added pre-trade analysis on every single trade. Execution desks trade hundreds of FX transactions every day and it is not practically feasible to conduct what-if analysis on every single order. This is where the positive feedback loop from the post-trade process should cover the majority of the smaller, or more liquid, tickets, as discussed in previous articles. A periodic assessment of execution performance allows checks to be carried out on whether any further changes need to be made to manage and optimise the decisions for the bulk of the flow. Having said that, if it is possible from a technology perspective, it would be valuable to have a pre-trade benchmark, such as the fair value expected cost, calculated for every trade to allow an ex-post comparison.
So, let us focus on value-added pre-trade analysis for now, defined as the case whereby the user performs scenario, or what-if, analysis on a specific trade. We would define this universe as larger trades, and trades in less liquid currency pairs. Guidelines for defining what constitutes a larger or less liquid trade could be included in an institution’s best execution policy.
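The prioritisation rule described above can be sketched as a simple filter. The notional threshold and the liquid-pair list here are hypothetical placeholders; in practice both would come from the institution's best execution policy:

```python
def needs_pretrade_review(trade,
                          size_threshold_usd=50e6,
                          liquid_pairs=frozenset({"EURUSD", "USDJPY", "GBPUSD"})):
    """Flag trades that merit value-added what-if analysis: large
    tickets, or tickets in less liquid currency pairs. Everything
    else is left to the routine post-trade feedback loop."""
    return (trade["notional_usd"] >= size_threshold_usd
            or trade["pair"] not in liquid_pairs)
```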
2. Analysis to be performed
Timing of trade
This is obviously only of interest for trades with discretion around timing. Many FX trades are executed without this discretion, e.g. a 4pm WM Fix order or where a Portfolio Manager requires immediate execution to attain a specific current market level. However, if there is discretion, then the impact on cost can be significant. Pre-trade analytics should allow a user to compare costs for different execution times over a given day. For example, on days with relatively low volatility and little price direction it may be beneficial to wait and execute during times of higher liquidity. This issue of market risk is covered later as taking into account potential ‘opportunity cost’ is clearly critical in such decision making.
Sizing of trade
Another common theme that requires analysis is determining the ‘optimal’ size to trade. Again, there may be little discretion here, but if there is flexibility, then scenario analysis can add value given how costs fluctuate by size. The issue can be fundamentally thought of as ‘how quickly can the market digest my risk’. There is often a misconception that the FX market is so deep and liquid that such questions really shouldn’t be a consideration, often citing the BIS survey’s $5 trn of volume traded per day. However, in reality, we often see examples where relatively small tickets can sometimes create significant market impact and footprint. The FX market is generally liquid compared to other asset classes, but it is also fragmented with a lot of liquidity recycled across venues and liquidity providers. One could argue that the issue of declining risk appetite, and hence inventories, at market makers due to the regulatory environment may start to reverse given the changed administration in the US, which may help improve the conditions for executing larger sizes. However, it is clear, that care should be taken when determining the notional sizes to execute, even for liquid pairs. Pre-trade analysis on costs by size, and also information on prior executions of similar sizes to see what has worked well and what hasn’t at different times of the day, can be extremely valuable.
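A stylised cost-by-size curve of the kind used in such scenario analysis might combine half the quoted spread with a square-root market-impact term in the trade's share of daily volume. The functional form and coefficients below are assumptions for illustration, not the BestX cost model:

```python
import math

def expected_cost_bps(notional_usd, adv_usd, spread_bps=1.0, impact_coeff=10.0):
    """Stylised expected cost in basis points for a trade of a given
    size: half the quoted spread plus a square-root impact term in the
    trade's share of average daily volume (adv_usd)."""
    participation = notional_usd / adv_usd
    return spread_bps / 2.0 + impact_coeff * math.sqrt(participation)
```

The concave square-root shape captures the qualitative point in the text: doubling the ticket size less than doubles the cost, but cost still rises steadily with size.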
Choice of execution method
As alluded to in the introduction, there are now many methods of execution available. We have seen a significant increase in the use of algos across both institutional and corporate clients, which in itself creates the problem of product selection. Such products can provide benefits in the form of cost savings, when viewed on an overall performance basis net of fees. However, there are risks, such as the obvious one that the market simply moves against you whilst the algo is working. This market risk is part and parcel of working any order, so some form of quantification of its possible cost is useful in a pre-trade environment to allow an informed decision to be made. Risk transfer may be preferable if the market conditions are unfavourable for working your order via an algo. Having the market move away from you may be simply down to bad luck and the random walk of the FX market, but not always. If your order is being worked in a way that is generating signalling risk then there may be market participants trading ahead of your order, resulting in less favourable execution. This may happen for many reasons, including poor product design, simplistic smart order routing, inappropriate sizing, or incorrect product selection for the time of day and currency pair. Having metrics available in a pre-trade environment that, for example, quantify market footprint and signalling risk for similar trades in the past can help in the selection of execution method and product to mitigate such risks.
Duration of trade
A common question when deciding to trade over a period of time is “how long?”, especially if the trade does not have a specific objective of tracking a particular benchmark. For example, when trading an algo over the WMR fixing window, with the specific objective of minimising tracking error to the Fix, the duration should match the window. Or, if a passive equity portfolio is rebalancing and the objective is to achieve as close as possible to an average rate over the window of time that the equity exchanges are open, then the duration of the FX trade should match. However, if there is discretion over setting the duration, then pre-trade analysis can add value as there are conflicting forces at play. If you trade too quickly, you may create unsatisfactory market impact, whilst minimising the time that the market has to potentially move against you, defined as opportunity cost. Equally, if you trade too slowly, then you may minimise market impact but run significant market risk, especially in a high volatility environment, potentially resulting in adverse opportunity cost. Figure 1 below illustrates the conflict.
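The duration trade-off can be sketched numerically: an impact term that falls as the order is worked more slowly, against an opportunity-cost term that grows with the square root of time. All coefficients here are illustrative assumptions:

```python
import math

def total_cost_bps(duration_hrs, impact_coeff=5.0, vol_bps_per_day=50.0):
    """Stylised total cost of working an order over duration_hrs:
    market impact falls the more slowly the order is worked, while
    opportunity-cost risk grows with the square root of time."""
    impact = impact_coeff / math.sqrt(duration_hrs)
    risk = vol_bps_per_day * math.sqrt(duration_hrs / 24.0)
    return impact + risk

# A simple grid search over candidate durations picks the cheapest point
durations = [0.25, 0.5, 1.0, 2.0, 4.0, 8.0]
best_cost, best_duration = min((total_cost_bps(t), t) for t in durations)
```

The U-shaped sum of the two terms is exactly the conflict Figure 1 is describing: an interior duration minimises expected total cost.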
Netting
To net or not to net, that is the question. Unfortunately, not an easy question to answer. There is no simple yes or no; it really does depend on a number of factors, including available liquidity, and therefore spread cost, together with prevailing market volatility. As above, there are once again competing forces at play. If liquidity is good, and volatility is relatively high, then it may make sense not to wait too long for offsetting orders to benefit from netting, as the opportunity cost from waiting could more than outweigh the potential cost savings from crossing spreads less frequently. If, however, volatility is relatively low, and liquidity is poor, then it may make sense to wait to net orders, as in this scenario the opportunity cost may be less than the spread savings. This gross simplification is portrayed graphically in Figure 2 below.
So, in essence, the answer is, ‘it depends’. It would therefore be valuable to have some form of netting analysis incorporated within the pre-trade stage of the process to help evaluate this on a case by case basis.
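A crude version of that case-by-case evaluation might compare the spread saved on the netted fraction of flow against the opportunity-cost risk of waiting. Every input here is an illustrative assumption:

```python
import math

def should_wait_to_net(spread_bps, vol_bps_per_day, wait_hrs, net_fraction=0.5):
    """Wait to net if the spread saved on the expected netted fraction
    of flow exceeds the opportunity-cost risk accumulated while
    waiting for offsetting orders to arrive."""
    spread_saving = net_fraction * spread_bps
    opportunity_risk = vol_bps_per_day * math.sqrt(wait_hrs / 24.0)
    return spread_saving > opportunity_risk
```

With wide spreads and low volatility the function favours waiting; with tight spreads and high volatility it favours executing immediately, matching the qualitative picture above.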
3. Results storage
So, you’ve done all the analysis and executed the trade. Now what? In our view, best practice should be that such analysis is saved and stored for the specific trade. When you go back into your post-trade analysis, how valuable would it be to have the trades tagged with the associated pre-trade analysis you performed? This then allows a comparison of performance on a post-trade basis with the pre-trade analysis, e.g. did choosing that particular algo perform as expected? This feedback loop is valuable as it allows the decisions to be assessed and then adjusted in the future to improve the result even further. Spending the time to perform pre-trade analysis is not about ‘ticking a box’, it should be time well spent to help add additional value to the execution process.
Pre-trade is a core component of the best execution process. The increasing focus on best execution from a regulatory perspective has propelled pre-trade into a more mandatory status, rather than a ‘nice to have if we have the time’, although one could argue it was never just a ‘nice to have’ given the value it can bring to the execution result for the client. However, everyone is busy, very busy, all of the time, so incorporating pre-trade in a more systematic fashion requires technology to automate as much as possible. Trades should be prioritised such that only those where significant value can be added are focused on. And you should learn from past performance. Not necessarily from a machine-learning perspective, but simply by having at your fingertips previous experience, summarised in a form that allows quick, informed decisions to be made. Improving execution systematically requires the use of ‘smart data’, not just ‘big data’.
“Feedback loops and marginal gains – using TCA to save costs and improve returns”, Pete Eggleston, BestX, Oct 2016
“Applying the Pareto Principle to Best Execution”, Pete Eggleston, BestX, Feb 2017
“Signalling Risk – is it a concern in FX markets?”, Pete Eggleston, BestX, July 2016
BestX launched its first product last September, delivering a comprehensive set of analytics and reporting for post-trade best execution in FX. The software was designed to satisfy internal and external best ex requirements, including regulatory reporting requirements, often referred to colloquially as ‘box ticking’. Perhaps unfairly given this is a vital component of the fiduciary responsibility of asset managers to asset owners, and more broadly, of increasing importance to all FX market participants given the Global Code of Conduct and other initiatives. However, this article seeks to explore the value that the software can bring over and above the core ‘box ticking’ requirements.
Although there is still considerable debate on exactly what ‘best execution’ means in the FX markets, one component that has become clear is that any best execution policy should include a process to identify, monitor and record outliers. The question now arises – how should I define what is an outlier? As with most things, as soon as you start getting into the details it becomes clear that this is not necessarily straightforward and involves a number of factors. In this article, we explore these factors and suggest some approaches for what we are seeing at BestX evolve as best practice.
Continuous performance improvements, whereby all aspects of a process are examined with precision, are the hallmark of many leading teams and businesses. Seeking out such marginal gains, as exemplified by Sir Dave Brailsford with the GB cycling team, or the Mercedes Formula 1 team, has now become commonplace, and in this article we explore how such approaches can be applied to a continuous refinement of best execution.
This report breaks down the texts of the relevant FCA and MiFID II regulations, with a particular focus on the practical implications for the fixed income and FX markets. It also provides insight into how the MiFID II best execution requirements are of relevance even to those products and companies outside the technical scope of the legislation, and in many ways set the new standard for how best execution should be monitored and assessed. Lastly, the importance of new technologies and rigorous data analysis in this new era of best execution compliance is emphasized.
There are many different terms and methods used to describe and analyse the costs and associated performance of execution. There is an element of choosing the right tools for the job, and some market participants may require a less extensive range of metrics to measure costs and performance. However, there are some fundamental elements that form the foundations for any meaningful analysis, and in this article we explore these core components.