Saturday, August 12, 2006

EOD processing

When trading online in the fast-paced world of international finance, the actual trading system is, needless to say, important, as it is the direct interface towards the clients. If the experience, not only in terms of the actual product offering but also in terms of stability and availability, does not live up to what can be expected of such a system, the clients will simply take their money and go somewhere else.

A less obvious item is how the investment bank handles the EOD or End Of Day processing. The EOD process (I’ll just refer to this as the “EOD”) is, as the name indicates, all the stuff that goes on after the end of a trading day: clearing, settlement, netting, calculation of interests and fees and … the list goes on and on.

There are many requirements to the EOD, and I will focus on two of them: speed and service disruption. Let us begin with the latter. It may seem obvious that the EOD should not cause a disruption in the service and availability of the clients' online trading capabilities. This, unfortunately, is far from the case. I've worked with a lot of major banks, and, believe it or not, most of them are actually "down" for a period of time during their EOD. It may be only a few minutes or as much as several hours, but they nevertheless have to stop servicing their clients during this interval.

If we are talking non-FX, and especially physical equities, the problem is not so bad, as it is possible to find a window during which all the major markets are closed, notably between the US close and the Asian open.

If, however, we are talking FX and a lot of the futures exchanges, then trading goes on around the clock, from Monday morning at 05:00 Sydney time until Friday afternoon at 17:00 New York time. If you are interested in the specifics, they are:

Open: Monday morning 05:00 Sydney.
With northern wintertime this corresponds to: Sunday 19:00 Copenhagen (CET), 18:00 London, 18:00 UTC, 13:00 (1 pm) New York.
With northern summertime this corresponds to: Sunday 21:00 Copenhagen, 20:00 London, 19:00 UTC, 15:00 (3 pm) New York.

Close: Friday afternoon 17:00 (5 pm) New York.
With northern wintertime this corresponds to: Friday 23:00 Copenhagen, 22:00 London, 22:00 UTC, Saturday morning 09:00 Sydney.
In the week with EU, but not US, daylight saving this corresponds to: Friday 24:00 Copenhagen, 23:00 London, 22:00 UTC, Saturday morning 08:00 Sydney.
With northern summertime this corresponds to: Friday 23:00 Copenhagen, 22:00 London, 21:00 UTC, Saturday morning 07:00 Sydney.

Who invented the concept of summer-/wintertime anyway …

As Saxo Bank's main area of operation is FX, we have always been geared towards being open for business in the above interval. Once a day we download all positions from the Front Office (FO) trading system to the Back Office (BO) system to do the aforementioned EOD. Once the positions have been processed, they are uploaded back to the FO trading system, where the clients can view them. Apart from a lock on the actual positions during the upload, if they are netted out, the client does not suffer at all and trading can continue. We used to perform the download/upload once a day: at around 02:00 CET we would download all positions and upload them again once processed, as just mentioned. This modus operandi, however, delays the correct trading information for clients in the Asian region by a trading day, as they are several hours ahead of us in the CET time zone. Something had to be done.

This leads to the second (or first, depending on how you look at it) issue: speed. The processing of all the trades could take several hours. The original system had been devised when we did a few hundred, or maybe a thousand, trades a day. Now we do close to a hundred thousand trades every day, and the number keeps going up.

To solve these problems (timely reporting of account statements and continued scalability of the system) we have introduced a new Middle Office system (I must immediately say that I can in no way take the credit for devising or constructing this system; that rightfully goes to my esteemed colleagues in the BO Systems group).

What is done is the following: every time a trade is registered in the FO system, an "event" is sent to the Middle Office system, so a complete picture of all client accounts is kept there. All BO operations are then carried out, and the client can (almost) in real time see the correct and updated status of her account statement.

It may seem obvious that people/clients/institutions/banks wishing to participate in the global FX market can do so during its entire opening period, and that they should have real-time access to an updated account statement. However, it turns out that this is in no way obvious, and unless the institution in question has thought good and hard about it, it may never happen.
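The event-driven idea above can be sketched in a few lines of C++. This is a toy illustration only, not Saxo Bank's actual system: the event structure, class and field names are all hypothetical, and a real Middle Office would of course persist the events, handle corrections, fees and so on. The point is merely that once every trade arrives as an event, the account picture is always current and no nightly batch is needed to show clients their positions.

```cpp
#include <map>
#include <string>

// Hypothetical trade event as the FO system might publish it.
struct TradeEvent {
    std::string account;   // client account id
    std::string symbol;    // e.g. "EURUSD"
    double      amount;    // signed: positive = bought, negative = sold
};

// The Middle Office keeps a running position per (account, symbol),
// so an up-to-date account statement can be produced at any time
// without waiting for a nightly download/upload cycle.
class MiddleOfficeBook {
public:
    void OnTradeEvent(const TradeEvent& ev) {
        m_positions[ev.account][ev.symbol] += ev.amount;
    }

    double Position(const std::string& account, const std::string& symbol) const {
        std::map<std::string, std::map<std::string, double> >::const_iterator acc =
            m_positions.find(account);
        if (acc == m_positions.end()) return 0.0;
        std::map<std::string, double>::const_iterator pos = acc->second.find(symbol);
        return pos == acc->second.end() ? 0.0 : pos->second;
    }

private:
    std::map<std::string, std::map<std::string, double> > m_positions;
};
```

Netted-out positions fall out naturally: a buy followed by an equal sell leaves the stored position at zero, which is exactly why clients need not be locked out while this runs.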

Friday, June 30, 2006


I'll be in London 11-13 of July for the quarterly EMEA meeting in FPL.
Drop me a mail if anyone would like to go out for a drink.

Saturday, May 20, 2006

Trading Spot FX via FIX

The following article will appear in the next issue of FIX Global (


Trading Spot FX via FIX

In this article, I will address trading spot FX using the FIX protocol. I’ll begin with some general views on using FIX for this purpose and will then look at two specific cases using FIX.

The Global Technical Committee is currently working on a gap analysis of FX trading. They will most likely settle on using the Market Data Request message for trading on streaming quotes (SQ), and only using the Quote Request message for request-for-quote (RFQ) trading. This is to differentiate between the two "ways" of trading. My own personal preference is to differentiate between order placement and trading. For that reason, I favor using the Quote Request methodology when trading on SQ (or RFQ), as the Quote Response message can then be used, as opposed to the Order - Single message, which would be used with the Market Data Request methodology. The names of the two methodologies (quote vs. market data) also make it easier to understand whether we are talking about trading or order placement.

Please note, the Quote Response message is only available in version 4.4.

Regardless of the methodology used, best practice dictates that the trading session be separated from the quote session. The reason for this is to avoid requesting old quotes via the Resend Request message, should one receive a sequence number that is too high. On the trade session it is paramount that all messages are received, but the same does not really apply on the quote session, as old quotes are basically useless.
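The decision logic is small enough to sketch. The function below is an illustration under my own naming, not code from any real FIX engine: given which session a message arrived on and its sequence number, it decides whether to process it, adopt the higher number without a resend (quote session), or issue a Resend Request (trade session).

```cpp
// Sketch of per-session gap handling. Names are illustrative.
enum SessionType { QuoteSession, TradeSession };
enum GapAction   { Process, AcceptAndProcess, SendResendRequest };

// nExpected is the next sequence number we expect; nIncoming is what arrived.
GapAction OnSequenceNumber(SessionType session, int nExpected, int nIncoming)
{
    if (nIncoming == nExpected)
        return Process;

    if (session == QuoteSession)
        // Old quotes are useless: adopt the incoming number as the
        // next in line instead of asking for a resend.
        return AcceptAndProcess;

    // On the trade session every message matters, so fill the gap.
    return SendResendRequest;
}
```

A sequence number *lower* than expected would, on either session, normally be a session-level error (unless PossDupFlag is set); that case is left out of the sketch.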

Another best practice is that a trade should always trigger the send-out of a new quote (with a max trade amount). In this way, the receiving end is always aware of the amount of liquidity available for a given currency cross.

Prior to version 4.3, FIX did not really support FX trading. Saxo Bank has built its B2B FIX server around version 4.3 but will extend it to also support version 4.4 in the near future. It is, however, quite possible to do FX trading over 4.2, as we will see below.

At Saxo Bank we have around a dozen liquidity providers, most of which supply prices using their own proprietary API. A couple of them use FIX, however. We will look more closely at two that represent two quite different ways of utilizing the FIX protocol.

So why is it interesting that liquidity providers use FIX when sending us prices? Can't a proprietary API be just as good? The answer to the latter question is "yes." We will not refuse to bring on a liquidity provider just because they are not using FIX. That said, there are a number of advantages to using FIX (assuming the provider is in compliance with the protocol).

From a development perspective, we are spared the hassle of "learning" and understanding yet another way of integrating, cutting the time it takes to bring on a provider from months down to weeks. We actually had a case where a provider had been unable to begin integration work for another six months due to resource constraints. Instead, the provider moved from their own API to FIX and was brought on three months earlier.

We face the same issues from a support perspective. Business support and operations do not need to familiarize themselves with a bunch of new error messages, disconnect scenarios and general troubleshooting if the API is entirely new to them. With FIX they know what to do if something does not work as expected. Time is saved resolving any problem that may arise, which, given the fast pace of FX trading, can mean the difference between winning and losing money.

Let’s take a look at the two case studies:

One provider uses version 4.2 of the protocol. Quotes are delivered using the Market Data - Incremental Refresh message. The message has been modified with a custom tag to contain the QuoteID. The FIX protocol is an open standard, and there is nothing wrong with extending individual messages as long as this is done with custom tags - e.g. the QuoteID above should not use tag 117, but e.g. 5117. When hitting a quote, the New Order - Single message is used with the order type set to Previously quoted (40=D). Price, amount and QuoteID are included as well. If the trade is good, an Execution Report is sent back with ExecType equal to Fill (150=2) and OrderStatus equal to Filled (39=2).
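To make the "previously quoted" flow concrete, here is a sketch of assembling such an order body. It is illustrative only: tag 5117 is the hypothetical custom QuoteID tag from the example above, and the session-level fields (8, 9, 34, 49, 56, 52, 10) that a real FIX engine wraps around the body, including the checksum, are left out.

```cpp
#include <sstream>
#include <string>

// Build the body of a FIX 4.2 New Order - Single with
// OrdType = Previously quoted (40=D). Sketch, not a full engine.
std::string BuildPreviouslyQuotedOrder(const std::string& clOrdId,
                                       const std::string& symbol,
                                       char side,            // '1' = Buy, '2' = Sell
                                       long amount,
                                       double price,
                                       const std::string& quoteId)
{
    const char SOH = '\x01'; // FIX field delimiter
    std::ostringstream os;
    os << "35=D"  << SOH                 // New Order - Single
       << "11="   << clOrdId << SOH      // ClOrdID
       << "55="   << symbol  << SOH      // Symbol
       << "54="   << side    << SOH      // Side
       << "38="   << amount  << SOH      // OrderQty
       << "40=D"  << SOH                 // OrdType = Previously quoted
       << "44="   << price   << SOH      // Price
       << "5117=" << quoteId << SOH;     // custom QuoteID tag (assumption)
    return os.str();
}
```

In production the price field would need controlled formatting (fixed precision per currency cross) rather than the stream's default.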

Another provider is using version 4.4 and what I refer to as the Quote Request methodology. After the initial logon a number of Quote Request messages are sent. These indicate which currency pairs and for what amounts we would like to receive quotes. Multiple bands are supported, leading to a steady stream of quote messages from the provider. Whenever we wish to trade on a quote, the Quote Response message is used, with the quote response type set to hit/lift (694=1). If the trade is good, the provider responds in an Execution Report with ExecType equal to Trade (150=F) and OrderStatus equal to Filled (39=2).

The amount of data being sent over the wire in the two cases is almost the same. The Quote message is slightly smaller than the Market Data message, but only by a few tags.

As can be seen, there is nothing to stop companies from utilizing the FIX protocol for FX trading. In fact, since FIX is becoming – if not already so – the de-facto standard for all electronic trading, there is much to be gained from moving to FIX, in terms of faster time-to-market and reduced operating costs.


Karsten Strøbæk is Team Lead and Lead Developer for the STP Interfaces group at Saxo Bank. The group’s responsibilities include the development, maintenance and troubleshooting of all trading connections and order routing connections out of the bank. Instead of using a commercial offering, the bank has developed its own FIX library. Karsten writes a blog on FIX and high frequency trading in general (

Saxo Bank is a global investment bank, based in Copenhagen, Denmark. Founded in 1992, Saxo Bank has rapidly become a well-known specialist in online trading in the international capital markets. A fully licensed, EU-regulated financial institution, the bank is recognized for its award-winning information, execution and risk management platform, the SaxoTrader, which consistently earns top prizes in significant industry survey, poll and awards events.

The Bank’s business model, which emphasizes facilitation, decidedly promotes partnerships with liquidity providers and distributors alike.  As a result, Saxo Bank has emerged as a leading White Label Partner, able to accelerate licensed financial institutions’ time-to-market as providers of online trading for their client base.  Saxo Bank serves clients in more than 120 countries, supports sales and service in 35 languages and offers its platform in 20 languages.

Thursday, March 30, 2006

FAST Reference Code for C++

I have been asked to make the reference code for C++ for the FAST protocol. The reference code – and all other information about the FAST protocol – can be found at

This work will most likely be done sometime this summer.

Tuesday, March 21, 2006

Lock free caching

All developers writing multi-threaded, real-time, mission-critical systems have encountered the problem of reading from a cache while updating it at the same time. This is something that should be avoided, as it will generally lead to exceptions, but how do you get your cache updated without introducing concurrency problems in your code?

The following addresses this problem and presents a (well known) design to solve it. The small code samples are in C++, but any language will do. I will also show a way to check for updates without reading all values (we assume here that the values are stored in a database), keeping the load on the database to a minimum.

Please add the usual disclaimers about this not being production code, that you have to add error checking, etc. I must also apologize for the layout; when I have more time, I will create a template/style to accommodate code snippets.

Assume that you have a map of some sort.

typedef std::map<key,value> CMapCollection;
CMapCollection m_colMyMap[2];
long m_lMapCacheIdx;

As can be seen, we declare an array of two maps: one holds the active map and the other the passive map. m_lMapCacheIdx holds the index of the active cache, so when we wish to look up a key, we do the following:

CMapCollection::iterator it = m_colMyMap[m_lMapCacheIdx].find(theKey);

We can now do our lookups in the active map, and update the passive map without introducing concurrency problems in the form of locks.
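A word of caution: the safety of this pattern hinges on readers observing the index swap atomically. The original code relies on aligned long reads and writes being atomic on the target platform. With C++11 (which did not exist when this was written), the same idea can be sketched explicitly with std::atomic; the class and member names below are my own illustration:

```cpp
#include <atomic>
#include <map>
#include <string>

// Double-buffered cache: readers use the active map while the single
// writer refills the passive one, then publishes it by flipping the index.
class DoubleBufferedCache {
public:
    DoubleBufferedCache() : m_active(0) {}

    bool Lookup(const std::string& key, int& value) const {
        const std::map<std::string, int>& m =
            m_maps[m_active.load(std::memory_order_acquire)];
        std::map<std::string, int>::const_iterator it = m.find(key);
        if (it == m.end()) return false;
        value = it->second;
        return true;
    }

    // Called from the single writer thread only.
    void Publish(const std::map<std::string, int>& fresh) {
        const int passive = 1 - m_active.load(std::memory_order_relaxed);
        m_maps[passive] = fresh;                              // refill passive copy
        m_active.store(passive, std::memory_order_release);   // flip readers over
    }

private:
    std::map<std::string, int> m_maps[2];
    std::atomic<int> m_active;
};
```

Note the remaining caveat in both versions: a reader that picked up the index just before a flip may still be walking the now-passive map when the writer clears it on the *next* refresh. Updating infrequently, as this design does, makes that window a non-issue in practice.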

My UpdateCache() function will connect to a database, retrieve a recordset and update the cache accordingly.

bool CMyClass::UpdateCache( void )
{
    bool bRet = m_pConnection->AccessDatabase(...);

    // Find the passive cache
    const long lPassiveIdx = m_lMapCacheIdx ? 0 : 1;

    // Clear the passive cache
    m_colMyMap[lPassiveIdx].clear();

    // Loop around the recordset returned by the call to AccessDatabase(...) and
    // update the, just cleared, passive cache.

    // Switch active cache
    if ( bRet )
        m_lMapCacheIdx = lPassiveIdx;

    return true;
}

We periodically want to check for new values, so we have a worker thread timing out with a given interval, here 1000 ms.

unsigned long CMyClass::_WorkerThread ( CMyClass *pThis )
{
    HRESULT hRes = CoInitializeEx( NULL, COINIT_MULTITHREADED );
    if (hRes != S_OK) ErrorExit( _T("CoInitializeEx failed"), hRes );

    unsigned long ulRes = pThis->WorkerThread();

    CoUninitialize(); // Closes the COM library on the current thread and frees its resources

    return ulRes;
}

unsigned long CMyClass::WorkerThread()
{
    while ( true )
    {
        unsigned long ulEvent = WaitForSingleObject(m_hEvent, 1000); // Timeout every sec
        switch (ulEvent)
        {
        case WAIT_OBJECT_0: // Something in queue or stop flag set
            if (m_bStopFlag)
                return ThreadExitCleanUp(0);
            break;
        case WAIT_TIMEOUT: // Timed out - check for updates
            UpdateCache();
            break;
        default:
            return ThreadExitCleanUp(-1);
        } // end switch on ulEvent
    } // end while true
}

To ease readability I will reduce my UpdateCache function to the following:

bool CMyClass::UpdateCache( void )
{
    bool bRet = m_pConnection->AccessDatabase(...);
    // Do the update stuff
    return true;
}

I've declared my UpdateCache function as a bool; old habit, as we do not yet use the return value; it might as well have been declared as void.

So every second we call our UpdateCache() function. However, we may not want to actually check for updates this frequently.

We will add some more code to UpdateCache. Assume the following:

m_llUpdateTime is declared as __int64; it is the time stamp of the last update.
UtcTimeNow() is a helper function that returns the current UTC time (always use UTC when doing anything with time if you want it to work across different time zones).

bool CMyClass::UpdateCache( void )
{
    __int64 tNow;
    UtcTimeNow( &tNow );

    if ( tNow < m_llUpdateTime ||
         (tNow - m_llUpdateTime)/TSW_SECONDS_TO_NANOSEC >= 900 )
    {
        // Do the update stuff
        m_llUpdateTime = tNow;
    }
    return true;
}

We now only access the database every 15 minutes (900 seconds), but we still do it regardless of whether any rows in the database have been updated or not. If we are reading large amounts of data, this is really a waste of resources.

To accommodate this last requirement, I will use a design pattern of first checking a timestamp in another (small) table, and only if this timestamp is more recent than when I last read my data will I access the database. The database schema for the TableUpdates table is: TableName (varchar), Timestamp (datetime).

The member variable m_TableUpdate is a helper object maintaining a connection to the database. It has two helper functions: CheckForTableUpdate() and AcceptTableUpdate(). CheckForTableUpdate() takes a table name and an output flag; the flag is set if the timestamp for that table has changed since the last time we checked, while the return value indicates whether the check itself succeeded. AcceptTableUpdate() records when the last update was read. This design allows us to use the same class for checking multiple tables for updates.
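A minimal sketch of such a checker could look like the following. Everything here is an assumption of mine for illustration: FetchTimestampFn stands in for a cheap "SELECT Timestamp FROM TableUpdates WHERE TableName = ?" query, injected as a function pointer so the class can be exercised without a database.

```cpp
#include <map>
#include <string>

typedef long long Timestamp;
typedef Timestamp (*FetchTimestampFn)(const std::string& table);

class TableUpdateChecker {
public:
    explicit TableUpdateChecker(FetchTimestampFn fetch) : m_fetch(fetch) {}

    // Sets *pIsUpdated if the table's timestamp moved past the last
    // accepted one. Returns false if the lookup itself failed.
    bool CheckForTableUpdate(const std::string& table, bool* pIsUpdated) {
        Timestamp ts = m_fetch(table);
        if (ts < 0) return false; // negative = query failed, in this sketch
        m_lastSeen[table] = ts;
        *pIsUpdated = ts > m_lastAccepted[table];
        return true;
    }

    // Remember that we have now read data as of the last seen timestamp.
    bool AcceptTableUpdate(const std::string& table) {
        m_lastAccepted[table] = m_lastSeen[table];
        return true;
    }

private:
    FetchTimestampFn m_fetch;
    std::map<std::string, Timestamp> m_lastSeen;  // per-table state allows
    std::map<std::string, Timestamp> m_lastAccepted; // checking many tables
};

// Example stub used below: pretends the table was touched at t=100.
static Timestamp StubFetch(const std::string&) { return 100; }
```

The per-table maps are what make the same instance reusable for checking multiple tables, as the post describes.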

Update our UpdateCache() function to the following:

bool CMyClass::UpdateCache( void )
{
    LPCTSTR TableUpdateTag = _T("Exchanges");
    __int64 tNow;
    UtcTimeNow( &tNow );

    if ( tNow < m_llUpdateTime ||
         (tNow - m_llUpdateTime)/TSW_SECONDS_TO_NANOSEC >= 900 )
    {
        bool bIsUpdated = false;
        if ( !m_TableUpdate.CheckForTableUpdate(TableUpdateTag, &bIsUpdated) )
        {
            bIsUpdated = true; // We force a read if the check failed
            // Some error logging
        }

        if ( bIsUpdated )
        {
            if ( !m_TableUpdate.AcceptTableUpdate(TableUpdateTag) )
            {
                // Some more error logging
            }
            // Do the update stuff
        }
        m_llUpdateTime = tNow;
    }
    return true;
}

We have now achieved what we wanted, namely to only read the (map)values from the database when they have been updated.

Monday, March 13, 2006

Mark Seemann has a blog

My good friend (and former colleague from back when I did consultancy work in the happy dot-com era) Mark has a blog about .NET design and programming. Read it! Mark is a much better writer than I am, and he knows this stuff. Personally I don't understand half of what he is writing about, but that is more an issue with my lack of .NET knowledge than with the quality of the contributions.

We actually met back when both of us studied economics at the University of Copenhagen. If I remember correctly it was a class in game theory, and as usual Mark had to explain the finer details to me.

Sunday, March 12, 2006

The FAST Protocol

The Market Data Optimization Working Group (aka MDOWG) under FPL has recently released the first version of the FAST protocol. And FAST really does seem to be fast: the initial proofs of concept show improvements of up to 70% compared to "native" FIX.

But what is FAST (FIX Adapted for STreaming), and how will it influence the existing FIX session? This blog entry will try to elaborate on these questions. I have looked at the material supplied at the FAST Technical Summit held at the London Stock Exchange back in January.

The FAST protocol consists of two specifications: the FAST Field Encoding Specification (FAST CODEC) and the FAST Serialization Specification (FAST SERDES). The first has to do with sending fewer tags, and hence less data, over the wire; the second with compressing the shorter message before sending it.

There are two approaches to implementing FAST: as an integrated part of the FIX session or as a separate FAST session layer. It is recommended to use the latter, as this will have minimal or no impact on the existing applications. An illustration of the message stack could be the following:

Business application
FIX message parsing
FAST Field encoding/decoding
FAST Wire format encoding/decoding

But why FAST at all? In the last few years we have seen dramatically increased market data volumes, leading to high bandwidth and processing costs, and there is no doubt that this trend will continue as the different exchanges add more and more new products to their offerings. It is then easy to see that if FAST offers up to 70% better utilization of your existing line capacity, you may well be able to stick with the T1 line you have and not invest in a new T3 line.

If we look at the details of the FAST protocol, we can begin with the basic feature set making up the protocol.

First of all, it has been designed and optimized for message streams. This does not mean that FAST cannot be used for other purposes, e.g. order routing, as we shall see later. FAST is content aware; this requires knowledge of the different message structures, which on the one hand leads to less flexibility, but on the other to a much more efficient protocol. FAST uses a byte-oriented binary representation and variable-length fields. Each message must contain at least one field; hence no fields, no message, nothing to send. The last feature is the use of a presence map, which enables efficient use of default values.

A basic FAST Implementation consists of the following:

  • A simple configuration in which the sender encodes the data and the receiver decodes the data.
  • No additional session management is needed.
  • FAST-encoded data is sent directly over the native transport.
  • Templates are sent out-of-band or statically downloaded.
  • Templates are defined in a simple, human-readable format.

At this point we have to define a Template. We need to understand how they are defined and used by FAST.

In general, a template is used to specify the structure, data types, and field operations of a message type. It specifies all fields included in a message as well as the sequence of those fields. If required it may also support repeating groups which allow a single message to efficiently convey multiple instructions: bids, asks, trades, etc.

When planning to encode a data feed, the user should begin by converting standard message formats to templates, as all message types must be expressed in the proper template format.

When the template is defined, FAST uses it in a number of ways. First of all, it provides the required content awareness described in the basic feature set. It allows FAST to encode and decode on a field-by-field basis, and it provides critical information for both field encoding and serialization operations. The field encoding instructions given in the template tell FAST which field encoding operations to perform. Lastly, the data type descriptions specified in the template inform the serializer whether a field is a string, an integer or a decimal value.

Templates can be defined using two different notations known as the Compact notation and the XML notation. I will only look at the former.

The structure of the Compact Notation is

((tag number)(data type)(field encoding operator))

Please note, that this is not an extensible solution and has known limitations, which is why the XML format is also proposed. I just prefer the compact notation as it is a straightforward way to define a template.

A summary of the Field Encoding Operators is given below:

! : Default Coding – default value per template
= : Copy Coding – copy prior value
+ : Increment Coding – increment prior value
- : Delta Coding – numeric or string differential
@ : Constant Value Coding – constant value specified in template
* : Implicit Value Coding – implies field values
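To get a feel for why these operators save space, here is a toy sketch of copy coding ("="): a field is only put on the wire when its value differs from the previous message, which is exactly why streams of similar messages compress so well. This is my own simplification; a real FAST codec also maintains the presence map and works from the template, both omitted here.

```cpp
#include <map>
#include <string>

// Toy copy-coding encoder: given the previous and current message as
// tag -> value maps, keep only the fields whose values changed.
typedef std::map<int, std::string> FixFields;

FixFields CopyEncode(const FixFields& previous, const FixFields& current)
{
    FixFields wire;
    for (FixFields::const_iterator it = current.begin(); it != current.end(); ++it) {
        FixFields::const_iterator prev = previous.find(it->first);
        if (prev == previous.end() || prev->second != it->second)
            wire.insert(*it); // new or changed: must be sent
        // unchanged fields are implied by copy coding and skipped
    }
    return wire;
}
```

Applied to the two orders below, almost everything (message type, sender, quantity, order type) repeats between messages, so only the symbol, price, ClOrdID and the like survive encoding, which is where the 70%-class savings come from.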

The data type descriptors are defined as:

s : string
u : unsigned integer
U : Unsigned integer supporting a NULL value
I : Signed integer
i : Signed integer supporting a NULL value
F : Scaled number

An example is in order to see how this works. Let us look at an Order – Single (35=D), just to stress the point that FAST can be used for other purposes than market data. Please note that only the logical result of the field encoding is shown. The serialization will compact the message even further and produce the physical message to be sent.

"[0]" represents a basic field delimiter.
The template on compact form is the following:


We want to place a Limit order to Buy 100 Microsoft (MSFT.OQ) at the price of 24.75. This will result in the following order as FIX. To ease readability we only identify the instrument by tag 55 (Symbol) and not the usual tag 22 (IDSource), tag 48 (SecurityID), and tag 100 (ExDestination):

8=FIX.4.2[0]9=108[0]35=D[0]49=SENDER[0]56=TARGET[0]34=7[0]52=20060312-21:53:05[0]11=12345678[0]54=1[0]38=100[0]40=2[0]21=1[0]55=MSFT.OQ[0]44=24.75[0]10=120[0]

If we FAST field encode the above, we will save around 41% before serialization:


The second order we want to place is a Limit to Sell 100 Apple (AAPL.OQ) at 12.55. This will give the following FIX order:


When we FAST field encode the second order we save 73% before serialization:


We can likewise give the template for the corresponding execution report message. On compact form it can be written as:


The accept message - Execution Report (New) - for our order place request for the Microsoft equities is as FIX:

8=FIX.4.2[0]9=204[0]35=8[0]49=TARGET[0]56=SENDER[0]34=6[0]52=20060312-21:53:05[0]11=12345678[0]54=1[0]38=100[0]40=2[0]55=MSFT.OQ[0]44=24.75[0]37=OrderID001[0]17=ExecID1[0]20=0[0]39=0[0]150=0[0]59=0[0]31=0[0]32=0[0]14=0[0]6=0[0]151=100[0]60=20060312-21:53:06[0]58=New order[0]10=033[0]

Again we will FAST field encode the FIX message. This will give us a saving of 39% before serialization

[0]204[0][0]TARGET[0]SENDER[0]6[0]20060312-21:53:05[0]12345678[0]1[0]100[0]2[0]MSFT.OQ[0]24.75[0]OrderID001[0]ExecID1[0]0[0]0[0]0[0]0[0]0[0]0[0]0[0]0[0]100[0]20060312-21:53:06[0]New order[0]033

The accept message of the second order as FIX is:

8=FIX.4.2[0]9=204[0]35=8[0]49=TARGET[0]56=SENDER[0]34=7[0]52=20060312-21:53:15[0]11=12345679[0]54=2[0]38=100[0]40=2[0]55=AAPL.OQ[0]44=12.55[0]37=OrderID002[0]17=ExecID2[0]20=0[0]39=0[0]150=0[0]59=0[0]31=0[0]32=0[0]14=0[0]6=0[0]151=100[0]60=20060312-21:53:18[0]58=New order[0]10=043[0]

As was the case for the actual order placement request, it is with the second execution report that we really see the advantage of FAST. If we field encode the second response, we save around 78% before serialization.


After this, nobody should be in doubt about the strength of the FAST protocol and the enormous potential it holds for optimizing the existing FIX session.

Monday, February 13, 2006

FIX for Streaming Quotes: version 4.3 vs. 4.4

When using FIX for trading FX spot on streaming quotes, one must choose between the two latest versions of FIX: 4.3 or 4.4. Version 4.2 does not really support spot trading, as several of the required messages are not defined. I've come across solutions where the Market Data message was modified to contain e.g. a QuoteID. This does work, but one cannot really claim to be compliant with the protocol, as custom tags have to be introduced. FIX4.2 is really for order routing, and that's it.

Should one then go for FIX4.3 or FIX4.4? Our own B2B FIX server is built around FIX4.3, mainly because it seemed to be the most commonly used version at the time development began, and it works quite well. But if I were to start today, I would definitely go for FIX4.4, as it is much more elegant for spot trading. I will try to quantify that statement below.

Both versions support the QuoteRequest and Quote messages; hence the subscription to, and receiving of, quotes is the same. The difference lies in the way you trade on these quotes.

For FIX4.3 you use version 4.2 syntax: the Order - Single message (35=D) with the OrdType set to Previously quoted (40=D).

An example of an Order - Single message, if you wanted to buy 5,000,000 EUR/USD @ 1.3363, could be (tag 117 is the QuoteID):

[Standard Header]
11=1702843[0]64=20060209[0]21=1[0]55=EUR/USD[0]460=4[0]54=1[0]60=20060207-10:00:29[0]38=5000000[0]40=D[0]44=1.3363[0]117=STE-EURUSD-2006-2-7:10.0.28:45-5000000
[Standard Trailer]

[0] used to represent the basic field delimiter.

Note that I have included tag 460 to identify the Product as Currency (4). This is pretty basic stuff, really, if you come from the cash equities or fixed income world.

If you were to trade FX on streaming quotes using FIX4.4, you would use the QuoteResponse message (35=AJ) - you could, in principle, still use the FIX4.2 syntax described above. The advantages of the QuoteResponse syntax are that there is no confusion over whether you are placing an order or making a trade, and that you are able to unsubscribe from a quote stream. The tag QuoteRespType (694) should be set to "1" if you want to hit or lift the quote, and to "6" if you want to pass on it, that is, stop the stream.

A QuoteResponse message could look like the following:

[Standard Header]
[Standard Trailer]

As can be seen, the QuoteResponse message is also slightly more compact than the Order - Single message.

Tuesday, January 24, 2006


One of the problems with the FIX protocol is the handling of missing messages. If you receive a message with a sequence number higher than expected, the protocol dictates that you send a ResendRequest, requesting the messages from the missing one onwards (to infinity). As messages are to be handled sequentially, all processing of incoming messages should stop until the gap has been filled.

This modus operandi works fine for trade and order related messages and executions, but what about quotes? Or market data? Are you really interested in receiving a bunch of old quotes that you can't trade on anyway, because they are ... old?

To overcome this, we have adopted the following best practice for our own B2B server: we use two FIX sessions, one for the quotes and one for the trades. The benefit is that the client does not have to request missing messages on the quote session, but can simply accept the incoming sequence number as the next in line, even if it is too high according to his or her own bookkeeping. At the same time, we will not send out old quotes, but simply a SequenceReset(GapFill) should we receive a ResendRequest. Obviously, ResendRequests should be made on the trade session.
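The server-side answer on the quote session can be sketched as follows. This is an illustrative fragment under my own naming, not our actual B2B server code: instead of resending stale quotes, the server replies to a ResendRequest with a Sequence Reset in Gap Fill mode (35=4, 123=Y), telling the client to jump straight to the next new sequence number. Session-level fields other than tag 34 are omitted.

```cpp
#include <sstream>
#include <string>

// Build the body of a Sequence Reset - Gap Fill covering the requested
// range. msgSeqNum is the sequence number of the first gapped message;
// newSeqNo is the number the next real message will carry.
std::string BuildGapFill(int msgSeqNum, int newSeqNo)
{
    const char SOH = '\x01';
    std::ostringstream os;
    os << "35=4"  << SOH             // Sequence Reset
       << "34="   << msgSeqNum << SOH
       << "123=Y" << SOH             // GapFill flag: fill, don't reset
       << "36="   << newSeqNo << SOH; // NewSeqNo
    return os.str();
}
```

On the trade session the same ResendRequest would instead be answered with the actual stored messages, which is exactly the asymmetry the two-session setup buys us.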

Sunday, January 22, 2006

FIX Primer

I've been asked to write a FIX primer. What versions to use for order routing or FX spot trading, best practices concerning the inner workings of the protocol, number of sessions to use in different scenarios and so on.

As this is not a simple task, I will try to post "chapters" here as they are done, along with general views on high frequency trading and order routing - stuff from my day-to-day life that others may find useful. I've integrated with more than 20 financial institutions and exchanges. Some have done a really good job, others not quite so. This blog is also about these connections and my experiences.