Python and MongoDB as a Market Data Platform by James Blackburn
1. Python and MongoDB as a Market Data Platform
Scalable storage of time series data
2014
2. Opinions expressed are those of the author and may not be shared by all personnel of Man Group plc
(‘Man’). These opinions are subject to change without notice, and are for information purposes only and do not
constitute an offer or invitation to make an investment in any financial instrument or in any product to which any
member of Man’s group of companies provides investment advisory or any other services. Any forward-looking
statements speak only as of the date on which they are made and are subject to risks and uncertainties that may
cause actual results to differ materially from those contained in the statements. Unless stated otherwise this
information is communicated by Man Investments Limited and AHL Partners LLP which are both authorised and
regulated in the UK by the Financial Conduct Authority.
Legalese…
4. Financial data comes in different sizes…
• ~1MB: once-a-day price data
• ~1GB x 1000s: 9,000 x 9,000 data matrices (sized in the sketch below)
• ~40GB: 1-minute data
• ~30TB: tick data
• Even larger data sets (options, …)
… and different shapes
• Time series of prices
• Event data
• News data
• What’s next?
Overview – Data shapes
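A rough back-of-envelope check of the matrix size above (a sketch only; assumes each cell is an 8-byte float64):

matrix_bytes = 9000 * 9000 * 8  # 8 bytes per float64 cell
print '9,000 x 9,000 matrix: %.2f GB' % (matrix_bytes / 1e9)  # ~0.65 GB, i.e. of order ~1GB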
5. Quant researchers
• Interactive work – latency sensitive
• Batch jobs run on a cluster – maximize throughput
• Historical data
• New data
• ... and want control over storing their own data
Trading system
• Auditable – SVN for data
• Stable
• Performant
Overview – Data consumers
6. The Research Problem – Scale
lib.read('Equity Prices')
Out[4]:
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 9605 entries, 1983-01-31 21:30:00 to 2014-02-14 21:30:00
Columns: 8103 entries, AST10000 to AST9997
dtypes: float64(8103)
Equity Prices: 77M float64s
593MB of data = 4,744Mbits!
600 MB
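A quick sanity check of the numbers above (assumes 8 bytes per float64; row and column counts taken from the DataFrame info):

rows, cols = 9605, 8103              # from the DataFrame info above
values = rows * cols                 # ~77.8M float64 values
size_mb = values * 8 / 2.0 ** 20     # 8 bytes per value
print '%.1fM float64s, ~%.0f MB, ~%.0f Mbit on the wire' % (values / 1e6, size_mb, size_mb * 8)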
8. Many different existing data stores
• Relational databases
• Tick databases
• Flat files
• HDF5 files
• Caches
Can we build one system to rule them all?
Overview – Databases
9. Goals
• 10 years of 1 minute data in <1s
• 200 instruments x all history x once a day data <1s
• Single data store for all data types
• From once-a-day data to tick data
• Data versioning + Audit
Requirements
• Fast – most data in-memory
• Complete – all data in single location
• Scalable – unbounded in size and number of clients
• Agile – rapid iterative development
Project Goals
11. Impedance mismatch between Python/Pandas/NumPy and existing databases:
- A machine cluster operating on blocks of data
vs.
- The database doing the analytical work
MongoDB:
- Developer productivity
- Documents map naturally to Python dictionaries (see the sketch below)
- Fast out of the box
- Low latency
- High throughput
- Predictable performance
- Sharding / Replication for growth and scale out
- Free
- Great support
- Most widely used NoSQL DB
Implementation – Choosing MongoDB
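As a minimal illustration of the document/dictionary point above (a hypothetical snippet using plain pymongo, not the platform's actual schema; assumes a local mongod):

import pymongo

client = pymongo.MongoClient('localhost')   # assumed local MongoDB instance
collection = client['research']['example']

doc = {'symbol': 'EXAMPLE', 'close': 101.5, 'volume': 12345}
collection.insert(doc)                              # a Python dict is stored as a BSON document
print collection.find_one({'symbol': 'EXAMPLE'})    # ...and comes back as a dict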
14. Mongoose key-value store
Implementation – Mongoose API
from ahl.mongo import Mongoose
m = Mongoose('research') # Connect to the data store
m.list_libraries() # What data libraries are available
library = m['jbloggs.EOD'] # Get a Library
library.list_symbols() # List symbols
library.write('SYMBOL', <TS or other data>) # Write
library.read('SYMBOL', version=…) # Read, with an optional version
library.snapshot('snapshot-name') # Create a named snapshot of the library
library.list_snapshots() # List the library's snapshots
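A hypothetical end-to-end use of the API above (assumes the ahl.mongo package is importable and a 'research' Mongo host is reachable; 'EXAMPLE' and the library name are made up):

import pandas as pd
from ahl.mongo import Mongoose

m = Mongoose('research')
library = m['jbloggs.EOD']

prices = pd.Series([100.0, 101.5, 99.8],
                   index=pd.date_range('2014-01-01', periods=3))
library.write('EXAMPLE', prices)          # creates a new version of the symbol
library.snapshot('before-restatement')    # freeze the current library state
ts = library.read('EXAMPLE')              # read back the latest version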
19. # Required imports for this excerpt (Python 2): cPickle, lz4 and bson's Binary
import cPickle
import lz4
from bson.binary import Binary

_CHUNK_SIZE = 15 * 1024 * 1024 # 15MB

class PickleStore(object):
    def write(self, collection, version, symbol, item):
        # Try to pickle it. This is best effort
        pickled = lz4.compressHC(cPickle.dumps(item))
        # Split the compressed blob into 15MB segments
        for i in xrange(len(pickled) / _CHUNK_SIZE + 1):
            segment = {'data': Binary(pickled[i * _CHUNK_SIZE : (i + 1) * _CHUNK_SIZE])}
            segment['segment'] = i
            # checksum() is a helper defined elsewhere in the library
            sha = checksum(symbol, segment)
            # Upsert keyed on (symbol, sha): identical segments are shared
            # across versions via the 'parent' set
            collection.update({'symbol': symbol, 'sha': sha},
                              {'$set': segment,
                               '$addToSet': {'parent': version['_id']}},
                              upsert=True)
Implementation – Arbitrary Data
22. class PickleStore(object):
    def read(self, collection, version, symbol):
        # Fetch this version's segments in order, re-assemble, decompress and unpickle
        data = ''.join([x['data'] for x in collection.find({'symbol': symbol,
                                                            'parent': version['_id']},
                                                           sort=[('segment', pymongo.ASCENDING)])])
        return cPickle.loads(lz4.decompress(data))
Implementation – Arbitrary Data
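A minimal sketch of driving the two methods above end to end (assumes a running MongoDB, a pymongo 2.x-style collection, a version document carrying an '_id', and a stand-in checksum() helper; the real helper lives elsewhere in the library):

import hashlib
import bson
import pymongo

def checksum(symbol, segment):
    # Assumed stand-in: SHA-1 over the symbol name and the segment bytes
    sha = hashlib.sha1(symbol)
    sha.update(bytes(segment['data']))
    return bson.Binary(sha.digest())

collection = pymongo.MongoClient('localhost')['research']['example.data']
version = {'_id': bson.ObjectId()}   # normally created by the version store

store = PickleStore()
store.write(collection, version, 'EXAMPLE', {'any': ['picklable', 'object']})
print store.read(collection, version, 'EXAMPLE')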
30. Random E-Mini S&P contract from 2013
Results – System Load
[Chart: system load during reads, OtherTick vs. Mongo (x2), N tasks = 32]
31. Built a system to store data of any shape and size
- Reduced impedance mismatch between the Python language and the data store
Low latency:
- Once-a-day data: 4ms for 10,000 rows (vs. 2,210ms from SQL)
- One-minute / tick data: 1s for 3.5M rows in Python (vs. 15s – 40s+ from OtherTick)
- 1s for 15M rows in Java
Parallel Access:
- Cluster with 256+ concurrent data accessors
- Consistent throughput – little load on the Mongo server
Efficient:
- 10-15x reduction in network load
- Negligible decompression cost (lz4: 1.8Gb/s)
Conclusions