How to make sure you are comparing apples with apples – complexities of peer analysis
A horse never runs so fast as when he has other horses to catch up and outpace.
We’ve been asked in the past to provide functionality that allows an institution to compare its execution performance against peers on a relative basis. This practice is widely used in equities and is often discussed and used at board level by both managers and asset owners. In our opinion, applying peer analysis in a market such as FX is more complex, largely due to the heterogeneous nature of the participants who transact FX. As we have discussed here before, the wide variety of trading objectives arising from the varied nature of FX market participants means that a simple equity-style peer analysis cannot be transplanted directly into FX.
Like all BestX enhancements, peer analysis has been backed up by academic research. Recently, the ECB and IMF released a paper indicating that transaction costs in the FX market depend on how 'sophisticated' a client is. They found that more sophisticated clients (e.g. those that trade larger and more numerous tickets across a wider variety of dealers) achieve better execution prices than those that are less sophisticated. This highlights the need for another level of comparison: you might be performing well against a benchmark, but are you doing as well as your comparable peers against that benchmark?
Classically, within peer analysis, you may decide to analyse net trading costs versus a specific benchmark (e.g. for equities, VWAP is commonly used) for a sector of the institutional community (e.g. ‘real money’ managers). However, for FX, the ‘real money’ manager peer group may be trading FX for many different reasons, which requires the use of a varied range of benchmarks and metrics to measure performance appropriately.
For example, within the real money community, many passive mandates track indices whose NAVs are computed using the WMR 4pm fix. Such a manager is focused on minimising slippage to the WMR Fix benchmark, and other benchmarks, such as Arrival Price or TWAP, may be irrelevant. It would therefore be inappropriate for a passive manager, whose best execution policy for these mandates aims to minimise slippage to WMR, to be compared to peers on performance versus, for example, arrival price.
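To make the distinction concrete, here is a minimal sketch of how slippage against a chosen benchmark rate might be computed. The function name, inputs, and sign convention are illustrative assumptions for this post, not BestX's actual methodology or API.

```python
def slippage_bps(executed_rate: float, benchmark_rate: float, side: str) -> float:
    """Signed slippage in basis points; positive means worse than the benchmark.

    side: 'buy' means buying the base currency, so executing above the
    benchmark rate is a cost; 'sell' is the reverse.
    """
    sign = 1.0 if side == "buy" else -1.0
    return sign * (executed_rate - benchmark_rate) / benchmark_rate * 1e4

# A passive manager tracking a WMR-fixed NAV measures against the 4pm fix:
print(slippage_bps(1.1002, 1.1000, "buy"))   # ~1.82 bps paid vs the fix
```

The same trade scored against an arrival price or TWAP would give a different number, which is exactly why the relevant benchmark must be fixed before any peer comparison is meaningful.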
Enter the BestX Factors concept. BestX Factors was one of the core concepts upon which our post-trade product was built, allowing a client to select the execution factors and performance benchmarks that are relevant to their business and execution policy. Through BestX Factors, clients can select specific benchmarks and, if required, apply them only to specific portfolios or trade types. The BestX Peer Analysis module is also governed by BestX Factors, allowing clients to construct Peer Analysis reports that are specific to the benchmarks relevant to their style of execution.
Furthermore, a static report providing high level relative peer performance only provides a broad picture and can mask key conclusions that may help identify where attention and resources should be focused to help improve performance.
For example, having the ability to inspect the peer results by product adds an extra layer of value. It may be that a specific client is performing extremely well versus the peer group when it comes to the Spread Cost for Spot trades but may perform less well for outright Forward trades.
In addition, further breaking out results by currency pair helps isolate other aspects where performance may be improved. Results vs Arrival Price for G10 may rank in the top quartile, but NDF performance may look less impressive when compared to the peer group.
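The quartile ranking described in the two examples above can be sketched in a few lines. This is an illustrative toy, with made-up costs and a simple rank-based quartile convention, not BestX's actual ranking methodology.

```python
from bisect import bisect_left

def quartile(client_cost: float, peer_costs: list[float]) -> int:
    """Return 1-4, where 1 means the client sits in the cheapest quartile.

    Lower cost is better, so rank by position in the sorted pool.
    """
    ranked = sorted(peer_costs + [client_cost])
    pos = bisect_left(ranked, client_cost)            # 0-based rank, low = good
    pct = pos / (len(ranked) - 1) if len(ranked) > 1 else 0.0
    return min(int(pct * 4) + 1, 4)

# Slicing by instrument can reveal uneven performance (all numbers invented):
client = {"Spot G10": 0.4, "NDF": 1.9}                # avg cost in bps
peers = {"Spot G10": [0.5, 0.7, 0.9, 1.1], "NDF": [0.8, 1.0, 1.2, 1.4]}
for product, cost in client.items():
    print(product, "quartile:", quartile(cost, peers[product]))
```

In this toy example the client is top quartile in G10 spot but bottom quartile in NDFs, which is precisely the kind of conclusion a single aggregated number would mask.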
In our view, therefore, it is important that any Peer Analysis allows such granular inspection of the results, so that the product adds real value rather than simply ticking another box: ‘yeah, sure, we do peer analysis, we consistently come in the top quartile’. Nice to know, but in line with our philosophy across the BestX product, it seems sensible to use the analysis to drive genuine improvement where possible.
A challenge of such granularity, of course, is ‘analysis paralysis’ as the amount of data can become overwhelming quite quickly. Nobody has the time to search through tables of results, trying to figure out the good and bad bits and what to do about it. We turn to another of our core philosophies here, which is turning big data into actionable smart data. Visualisation is critical in achieving this, and we return to our ‘traffic light’ concepts to help quickly highlight what is going on in a given portfolio of results.
The trophy icons simply indicate when a client is on the podium in terms of performance for that particular metric, whereas the traffic light colour indicates which percentile band the performance falls into.
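A minimal sketch of the traffic-light and trophy logic described above might look as follows. The band thresholds and the three-place podium rule here are illustrative assumptions, not BestX's actual configuration.

```python
def traffic_light(percentile: float) -> str:
    """Map a peer-percentile rank (0.0 = best performer) to a colour band."""
    if percentile <= 0.25:
        return "green"    # top quartile
    if percentile <= 0.50:
        return "amber"    # second quartile
    return "red"          # bottom half

def trophy(rank: int) -> bool:
    """Podium icon for the top three performers on a given metric."""
    return rank <= 3

print(traffic_light(0.10), trophy(2))   # green True
print(traffic_light(0.60), trophy(7))   # red False
```

The point of such an encoding is speed of consumption: a grid of colours and icons lets a reader spot outliers across hundreds of product/currency-pair cells without reading a single number.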
One other key factor to consider when conducting peer analysis is the size of the data pool, to ensure that any output can be analysed with confidence. For this reason, we waited to launch our Peer Analysis module until our anonymised buy-side community data pool had grown large enough. We feel we are now in that position, with an anonymised buy-side pool comprising millions of trade records, and we therefore launched the new Peer Analysis module for the buy-side as part of our latest release last weekend.
In conclusion, comparing performance to peers across the industry provides an additional input to the holistic view of best execution that the BestX software seeks to provide. It should be used in conjunction with the other metrics and analysis available throughout the product (for example, the fair value expected cost model), and it clearly focuses on relative performance only. Peer analysis should therefore be used carefully, especially when applied to FX, given the very heterogeneous nature of the market. The concept of client ‘sophistication’ is an interesting one that we are exploring further, to see whether it can be added at a later date to provide an additional clustering of the data (i.e. allowing clients of similar sophistication to compare themselves with one another).