Factors to consider when implementing a TCA framework

Transaction cost analysis. Execution quality analysis. Performance benchmarking. Best execution. There are many different terms and methods used to describe and analyse the costs and associated performance of execution. Clearly, there is an element of choosing the right tools for the job, and some market participants may require a less extensive range of metrics to measure costs and performance. However, there are some fundamental elements that form the foundations for any meaningful analysis and in this article we explore these core components.

It is essential to take into account the size of the trade

It sounds very obvious, but trade size is a key component of making an informed decision about the quality of an execution. For example, suppose you traded 500m EURUSD at 1pm today. Simply referencing an estimate of the market mid at 1pm may produce a cost number of 12 basis points. But how do you know whether this was ‘fair’? And how do you compare the performance of this execution with that received on 100m AUDUSD at a similar time with a different counterparty? The 12 basis point cost may have been extremely competitive for that size, but you cannot know that unless you can reference a framework that provides a consistent, fair value cost measure for this amount of risk, in this currency pair, at that specific time of day. Clearly, a consistent and level playing field is required.
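The idea of comparing each execution against a size-aware fair value benchmark, rather than raw slippage alone, can be sketched as follows. This is a toy illustration only: the model, the base spreads and the size penalty are all invented for the example, not BestX's actual methodology.

```python
# Toy sketch: compare executions of different sizes against a consistent
# fair-value cost model rather than comparing raw slippage numbers.
# All figures below are invented for illustration.

def expected_cost_bps(pair, notional_m):
    """Hypothetical fair-value cost model: an assumed base cost per pair
    plus a penalty that grows with notional. A real model would also
    condition on time of day and prevailing liquidity conditions."""
    base = {"EURUSD": 0.25, "AUDUSD": 0.50}[pair]  # assumed base costs (bps)
    return base + 0.02 * notional_m                # assumed linear size impact

def performance_vs_fair_value(pair, notional_m, realised_cost_bps):
    """Positive = cheaper than the fair-value benchmark for this size."""
    return expected_cost_bps(pair, notional_m) - realised_cost_bps

# The 12 bps paid on 500m EURUSD can now be compared like-for-like with,
# say, 4 bps paid on 100m AUDUSD, because both are measured against a
# benchmark that accounts for the size of the risk transferred.
eur = performance_vs_fair_value("EURUSD", 500, 12.0)
aud = performance_vs_fair_value("AUDUSD", 100, 4.0)
```

The point is not the specific numbers but the normalisation: once each cost is expressed relative to a fair value estimate for that size, pair and time, executions become directly comparable.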

What happens if the time stamp isn’t 100% accurate?

A very common problem, especially for trades executed by voice, where only an approximate time stamp may be recorded. Based on beta testing so far, BestX have seen cases where such time stamps can be wrong by anything from a few minutes to several hours, depending on the rigour of the systems and processes involved. If you base your TCA on a single metric, i.e. slippage to a mid estimated using a possibly erroneous time stamp, the results can be very misleading. In such cases, other measures can at least provide some useful information. For example, even if the time stamp is, say, 15 minutes out from the actual time of execution, measuring what the fair value cost should have been at the erroneous time gives you at least something to compare the execution against.
This is illustrated with an example below. Using a simple slippage measure to mid for this trade would have given a cost of 42 basis points, based on the erroneously reported time stamp. Was this cost acceptable or not? In this example, we have estimated the fair value cost for the trade at the reported time stamp, so you can at least get some feel for the quality of the execution. In this case, the estimated fair value cost at 11.15am is actually slightly more than the slippage estimated at this time, indicating that the cost seems reasonable.
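The sanity check described above can be expressed in a few lines. The prices and the fair value figure below are invented to mirror the 42 basis point example; only the logic, comparing measured slippage at the reported time against a fair value cost estimate for that same time, is the point.

```python
# Sketch: even with an inaccurate time stamp, compare the slippage measured
# at the reported time against a fair-value cost estimate for that same time.
# Prices are invented for illustration.

def slippage_bps(fill_price, mid, side):
    """Cost versus mid in basis points; positive = paid away from mid."""
    sign = 1.0 if side == "buy" else -1.0
    return sign * (fill_price - mid) / mid * 1e4

def looks_reasonable(slippage, fair_value_cost_bps):
    """The execution seems acceptable if measured slippage at the reported
    time does not exceed the fair-value cost estimated for that time."""
    return slippage <= fair_value_cost_bps

# A buy filled at 1.1050 against an assumed mid of 1.1004 at the reported
# 11.15am stamp gives roughly 42 bps of slippage; if the fair-value cost
# estimate at 11.15am is slightly higher, the cost seems reasonable.
s = slippage_bps(1.1050, 1.1004, "buy")
ok = looks_reasonable(s, fair_value_cost_bps=43.0)  # assumed fair value
```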


One could argue that if an accurate time stamp is not available, then a simple analysis of where the trade was filled in relation to the observed range of the day is adequate. Indeed, this may provide some comfort around the execution although, in 2016, we feel that it is possible to do better than this. Clearly, the preference is always to have accurate time stamps, and therefore accurate cost estimates, but we know we live in an imperfect world where such data is not always available. Hopefully, over time, available time stamp data will become more accurate and standardised, but until then it is important to have a framework that at least allows some measure of execution quality.
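For completeness, the range-of-the-day fallback mentioned above is simple to state: locate the fill within the day's observed high-low range. A minimal sketch, with invented prices:

```python
# Fallback when no reliable time stamp exists: where did the fill sit
# within the day's observed range? For a buy, 0.0 = low of day (best),
# 1.0 = high of day (worst); reversed for a sell. Prices are illustrative.

def position_in_range(fill, day_low, day_high, side="buy"):
    pct = (fill - day_low) / (day_high - day_low)
    return pct if side == "buy" else 1.0 - pct

# A buy filled at 1.1030 in a 1.0980-1.1080 day sits exactly mid-range:
p = position_in_range(1.1030, day_low=1.0980, day_high=1.1080)  # 0.5
```

This gives some comfort, but as the paragraph above notes, it is a blunt instrument compared with a time-aware fair value estimate.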

Using representative market data sources

Another factor to take into account is the range of sources used to construct the reference market data. It is important that the data provides a broad, representative view of the market and does not carry any inherent bias. For example, if the estimated market mid used to calculate slippage is sourced from high frequency trading firms, it may not be representative of the prices actually obtainable by a pension fund.
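One common way to reduce single-source bias is to build the reference mid from several feeds and take a robust central value, so that no one contributor dominates. The sketch below is illustrative only; the venue names and prices are invented, and real construction methods are more sophisticated.

```python
# Illustrative only: a composite reference mid built from several sources.
# Taking the median of per-source mids limits the influence of any single,
# potentially unrepresentative, feed. Sources and prices are invented.
from statistics import median

def composite_mid(quotes):
    """quotes maps source name -> (bid, ask); returns the median of the
    per-source mid prices."""
    mids = [(bid + ask) / 2 for bid, ask in quotes.values()]
    return median(mids)

quotes = {
    "venue_a":  (1.1002, 1.1004),
    "venue_b":  (1.1001, 1.1005),
    "hft_feed": (1.0990, 1.0992),  # an outlier feed skewed away from the rest
}
mid = composite_mid(quotes)  # the median ignores the outlying feed
```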

Coping with more complex execution products such as algos

One of the key trends witnessed recently, and one that seems to be accelerating, is the move of the FX industry towards a more order-driven market. Many more FX market participants now have access to execute via a rapidly growing array of order and algo products. For such execution methods, a simple cost estimate vs a market mid really only illuminates one component of the best execution story. The example below illustrates the point that one metric does not necessarily provide an indication of the quality of the execution received. This particular algo generated significant signalling risk, which effectively pushed the market up, resulting in the client receiving a higher average price than would have been the case with an algo that is better at hiding its footprint. Supplementing the headline cost numbers with a range of metrics provides the client with a more complete picture, thereby allowing much more informed decisions to be made over time in terms of execution product/venue/counterparty selection, ultimately resulting in significant cost savings.
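The kind of supplementary metrics described above can be sketched very simply. The mid path and fill price below are invented, and the drift measure is only a crude proxy for signalling risk, not a production market-impact model.

```python
# Sketch: supplementing a single headline cost with extra metrics for an
# algo execution on a buy order: slippage to the arrival mid, slippage to
# the interval TWAP, and a crude signalling-risk proxy (how far the mid
# drifted while the order was working). All prices are invented.

def arrival_slippage_bps(avg_fill, arrival_mid):
    return (avg_fill - arrival_mid) / arrival_mid * 1e4

def twap(mids):
    return sum(mids) / len(mids)

def drift_bps(mids):
    """Mid drift over the execution window; a large adverse drift on a
    buy order can indicate signalling risk / market impact."""
    return (mids[-1] - mids[0]) / mids[0] * 1e4

mids = [1.1000, 1.1004, 1.1009, 1.1015]  # assumed mid path while the algo works
avg_fill = 1.1010                        # assumed average fill price

metrics = {
    "arrival_slippage_bps": arrival_slippage_bps(avg_fill, mids[0]),
    "twap_slippage_bps": (avg_fill - twap(mids)) / twap(mids) * 1e4,
    "mid_drift_bps": drift_bps(mids),
}
```

Here a seemingly modest slippage to TWAP would mask the fact that the mid drifted adversely throughout the order, exactly the footprint problem a single headline number fails to reveal.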


The visualisation of results, allowing informed decisions to be made intuitively and quickly, offers significant benefits, especially when dealing with large, complex data sets. Ploughing through thousands of rows of data in Excel or pdf reports is not only a time-sink but also prone to error. Ultimately, if the output does not provide actionable intelligence, there is a risk that the system does not actually get used and the full benefits of monitoring your execution quality are never realised.

Security of results/reporting

With increased regulatory scrutiny on best execution and transaction costs, it is imperative that the analytics used are not only independent and free of any bias or conflict of interest, but also delivered in a totally secure environment. All data, both input trade data and all output results, obviously needs to be stored encrypted to the highest industry standards (e.g. AES 256). The FCA has recently announced that it supports the use of cloud services for financial services.

Compliance with the regulatory environment

The MIFID 2 definition of best execution states that an executing counterparty must achieve the ‘best possible result’ for the client, taking into account a range of execution factors. It does not state ‘simply measure the cost as a slippage to the prevailing mid-rate’. The ability to implement a documented best execution policy, monitor it and manage the workflow around exceptions to this policy is essential. Linking such a process, and the reporting of exceptions, to your TCA system such that it satisfies multiple requirements (i.e. business requirements to monitor and save costs, regulatory requirements regarding best execution and the fiduciary responsibility to asset owners) clearly provides benefits from an efficiency and consistency perspective. It would, therefore, seem sensible to future-proof your selection of TCA vendor to ensure that it is compliant with the regulatory environment post the implementation of MIFID 2 in January 2018.


The need for measuring, recording and justifying best execution is clear. Regardless of the complexity of the FX execution process, there is a core set of components that should be considered when selecting and implementing a set of analytics. For those market participants that only trade FX by voice, or via a custodian, the issues of time stamp availability and accuracy create a nuanced need for measures beyond simple slippage cost estimates to mid or range-of-the-day analysis. For those participants that use products such as algos, it is even more important to measure a range of metrics to provide insights into the true execution performance. Many clichés spring to mind, but ‘lies, damned lies and statistics’ seems appropriate, as there is clearly a risk of making incorrect decisions if the output from a TCA framework is limited or incomplete. Measuring costs and execution quality is not as straightforward as one might imagine, and to mitigate the other obvious cliché of ‘garbage in, garbage out’, it is important to be mindful of the potential pitfalls.