2. Topics
• Auto-Scaling Using Amazon EC2 and Scalr
• Nginx and Memcached on EC2, a 400% boost!
• NASDAQ exchange replay on AWS
• Persistent Django on Amazon EC2 and EBS
• Taking Massive Distributed Computing to the
Common Man - Hadoop on Amazon EC2/S3
9. Scalr overview
• By using Scalr, you can create a server farm that uses prebuilt AMIs
for load balancing, web servers, and databases. You can also
customize a generic AMI to host your actual application.
• Scalr monitors the health of the entire server farm, ensuring that
instances stay running and that load averages stay below a
configurable threshold. If an instance crashes, another one of the
proper type is launched and added to the load balancer.
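Scalr itself is a hosted web application, but the watchdog behaviour described above can be sketched in a few lines. A minimal sketch (not Scalr's actual code), assuming the boto Python library; the AMI id and function name are hypothetical:

    # Relaunch a crashed instance of the same type (sketch only).
    import boto

    conn = boto.connect_ec2()  # credentials from the environment

    def replace_if_dead(instance, image_id='ami-12345678'):
        instance.update()  # refresh instance state from EC2
        if instance.state != 'running':
            # Launch a replacement; a real system would also register
            # the new instance with the load balancer.
            r = conn.run_instances(image_id,
                                   instance_type=instance.instance_type,
                                   key_name=instance.key_name)
            return r.instances[0]
        return instance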
10. Scalr (2)
• Scalr is an open-source, fully redundant, self-curing, and
self-scaling hosting environment built on Amazon EC2.
• Scalr allows network administrators to create virtual
server farms using prebuilt components. Scalr uses four
Amazon Machine Images (AMIs): a load balancer, a
database, an application server, and a generic base
image.
• Administrators can preconfigure one machine and, when
the load warrants, bring additional machines with the
same image online to handle the increased requests.
13. Originally developed by Igor Sysoev for rambler.ru (the second-largest
Russian website), Nginx is a high-performance HTTP server / reverse
proxy known for its stability, performance, and ease of use. A great
track record, many solid modules, and an active development
community have rightfully earned it a steady uptick in users.
14. memcached is a high-performance, distributed memory object
caching system, generic in nature, but intended for use in
speeding up dynamic web applications by alleviating database
load.
"Memcached, the darling of every web developer, is
capable of turning almost any application into a speed
demon. Benchmarking one of my own Rails applications
resulted in ~850 req/s on commodity, non-optimized
hardware - more than enough in the case of this
application. However, what if we took Mongrel out of the
equation? Nginx, by default, comes prepackaged with the
Memcached module, which allows us to bypass the
Mongrel (from RubyForge) servers and talk to Memcached
directly. Same hardware, and a quick test later: ~3,550
req/s, or almost a 400% improvement!"
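The bypass described in the quote is plain nginx configuration. A minimal sketch, assuming pages are cached in memcached keyed by request URI; the upstream name and ports are hypothetical:

    # Serve from memcached when possible; fall back to Mongrel on a miss.
    upstream mongrel_backend {
        server 127.0.0.1:8000;              # hypothetical Mongrel instance
    }
    server {
        listen 80;
        location / {
            set $memcached_key $uri;        # key cached pages by URI
            memcached_pass 127.0.0.1:11211; # default memcached port
            default_type text/html;
            error_page 404 502 = @mongrel;  # cache miss -> dynamic backend
        }
        location @mongrel {
            proxy_pass http://mongrel_backend;
        }
    }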
22. Credit:
Thomas Brox Røst,
Visiting researcher, Decision Systems Group, Harvard
Persistent Django
on Amazon EC2 and EBS - The easy way
thomas.broxrost.com
tinyurl.com/6b48g9
23. Now that Amazon's Elastic Block Store (EBS) is publicly available,
running a complete Django installation on Amazon Web Services
(AWS) is easier than ever.
---
EBS provides persistent storage, which means that the Django database
is kept safe even after the Django EC2 instances terminate.
24. To set up Django with a persistent PostgreSQL database on AWS:
Set up an AWS account
Download and install the Elasticfox Firefox extension
Add your AWS credentials to Firefox
Create a new EC2 security group
By default, EC2 instances are an introverted lot: they prefer keeping to themselves and don't expose any
of their ports to the outside world. We will be running a web application on port 8000, so port
8000 has to be opened. (Normally we would open port 80, but since I will only be using the Django
development web server, port 8000 is preferable.) SSH access is also essential, so port 22 should be
opened as well. To make this happen we must create a new security group where these ports are opened,
as in the sketch below.
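Elasticfox does this through its GUI; a minimal sketch of the same step, assuming the boto Python library (the group name is hypothetical):

    import boto

    conn = boto.connect_ec2()  # credentials from the environment
    group = conn.create_security_group('django', 'Django dev server + SSH')
    group.authorize('tcp', 22, 22, '0.0.0.0/0')      # SSH
    group.authorize('tcp', 8000, 8000, '0.0.0.0/0')  # Django dev server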
25. Set up a key pair
Launch an EC2 Instance
Connect to your new instance (SSH using PuTTY)
- Install subversion
- Install, initialize and launch PostgreSQL
- Modify PostgreSQL config to avoid username/password problems
- Restart PostgreSQL to enable new security policy
- Set up a database for Django
- Install Django (checkout from SVN)
- Install psycopg2 (for database access from Python)
Set up a Django project
Test the installation
Launch the dev server
Create a Django app
Create and attach an EBS volume (see the sketch below)
Mount the filesystem
Move the database to persistent storage (with the database server stopped)
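A minimal sketch of the EBS step, assuming boto; the size, zone, instance id, and device name are hypothetical. The volume must be created in the same availability zone as the instance:

    import boto

    conn = boto.connect_ec2()
    volume = conn.create_volume(10, 'us-east-1a')  # 10 GB volume
    volume.attach('i-12345678', '/dev/sdh')        # appears as a block device

    # On the instance itself (shell): make a filesystem, mount it, then
    # move PostgreSQL's data directory onto the mounted volume.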
41. Hadoop
• Parallel computing platform
– Distributed filesystem (HDFS)
– Parallel processing model (Map/Reduce)
– Express computation in any language (see the sketch after this list)
– Job execution for Map/Reduce jobs
(scheduling + localization + retries/speculation)
• Open source
– Most popular Apache project!
– Highly extensible Java stack (at the expense of efficiency)
– Develop/test on EC2!
• Ride the commodity curve:
– Cheap (but reliable) shared-nothing storage
– Data-local computing (don't need high-speed networks)
– Highly scalable (at the expense of efficiency)
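"Any language" works through Hadoop Streaming, which runs mappers and reducers as plain stdin/stdout programs. A minimal word-count sketch in Python (file names are hypothetical):

    # mapper.py - emit "word<TAB>1" for every word on stdin
    import sys

    for line in sys.stdin:
        for word in line.split():
            print("%s\t1" % word)

    # reducer.py - sum counts per word; Hadoop sorts mapper output by
    # key, so identical words arrive on consecutive lines
    import sys

    current, count = None, 0
    for line in sys.stdin:
        word, n = line.rsplit("\t", 1)
        if word != current:
            if current is not None:
                print("%s\t%d" % (current, count))
            current, count = word, 0
        count += int(n)
    if current is not None:
        print("%s\t%d" % (current, count))

The pair is submitted with the streaming jar, e.g. hadoop jar hadoop-streaming.jar -input logs -output counts -mapper mapper.py -reducer reducer.py (paths hypothetical).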
47. Why Hive?
• Large installed base of SQL users
– i.e., map-reduce is for ultra-geeks
– it is much, much easier to write a SQL query
• Analytics SQL queries translate really well
to map-reduce
• Files are an insufficient data management
abstraction
– Tables, schemas, partitions, indices
49. Hive Query Language
• Basic SQL
– FROM clause subqueries
– ANSI JOIN (equi-join only)
– Multi-table insert
– Multi group-by
– Sampling
– Object traversal
• Extensibility
– Pluggable map-reduce scripts using
TRANSFORM (see the sketch below)
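Multi-table inserts and multi group-bys reduce the number of scans required (a poor man's alternative to multi-query optimization). A minimal HiveQL sketch of both a multi-table insert and TRANSFORM; the table, column, and script names are hypothetical:

    -- One scan of page_views feeds two output tables.
    FROM page_views pv
    INSERT OVERWRITE TABLE daily_counts
      SELECT pv.dt, COUNT(1)
      GROUP BY pv.dt
    INSERT OVERWRITE TABLE user_counts
      SELECT pv.userid, COUNT(1)
      GROUP BY pv.userid;

    -- Pipe rows through a custom script (streaming-style extensibility).
    ADD FILE parse_url.py;
    SELECT TRANSFORM(pv.userid, pv.url)
           USING 'python parse_url.py'
           AS (userid, domain)
    FROM page_views pv;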
50. Data Warehousing at Facebook
(Scribe is a server for aggregating log data streamed in real time from a large
number of servers. It is designed to be scalable, extensible without client-side
modification, and robust to failure of the network or any specific machine)
Data flow: Web Servers → Scribe Servers → Filers → Hive on
Hadoop Cluster → Oracle RAC / Federated MySQL
51. Hadoop Usage @ Facebook
• Data warehouse running Hive
• 600 machines, 4800 cores
• 3200 jobs per day
• 50+ engineers have used Hadoop
• Data statistics:
– Total data: ~2.5 PB
– Net data added/day: ~15 TB
• 6 TB of uncompressed source logs
• 4 TB of uncompressed dimension data reloaded daily
– Compression factor ~5x (gzip; more with bzip)
• Usage statistics:
– 3200 jobs/day with 800K map-reduce tasks/day
– 55 TB of compressed data scanned daily
– 15 TB of compressed output data written to HDFS
– 80 million compute minutes/day
52. Hadoop Job Types @ Facebook
• Production jobs: load data, compute
statistics, detect spam, etc.
• Long experiments: machine learning, etc.
• Small ad-hoc queries: Hive jobs, sampling
• GOAL: provide fast response times for
small jobs and guaranteed service levels
for production jobs
53. Usage Patterns at Yahoo
• ETL
– Put large data sources (e.g., log files) onto the Hadoop File System
– Perform aggregations, transformations, and normalizations on the data
– Load into an RDBMS / data mart
• Reporting and Analytics
– Run canned and ad-hoc queries over large data
– Run analytics and data mining operations on large data
– Produce reports for end-user consumption or loading into a data mart
54. Usage Patterns at Yahoo
• Data Processing Pipelines
– Multi-step pipelines for data processing
– Coordination, scheduling, data collection, and publishing of feeds
– SLA-carrying, regularly scheduled jobs
• Machine Learning & Graph Algorithms
– Traverse large graphs and data sets, building models and classifiers
– Implement machine learning algorithms over massive data sets
• General Back-End Processing
– Implement significant portions of back-end, batch-oriented processing on the grid
– General computation framework
– Simplify back-end architecture
55. What is Hadoop Pig?
Pig is a platform for analyzing large data sets. It consists of a
high-level language for expressing data analysis programs, coupled
with infrastructure for evaluating those programs; a small example
follows below.
http://www.cloudera.com/hadoop-training-pig-introduction
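A minimal Pig Latin sketch of the kind of program Pig evaluates; the file and field names are hypothetical:

    -- Count requests per user from a tab-separated access log.
    raw    = LOAD 'access_log' AS (user:chararray, url:chararray);
    grpd   = GROUP raw BY user;
    counts = FOREACH grpd GENERATE group AS user, COUNT(raw) AS requests;
    DUMP counts;

Pig compiles this script into a sequence of map-reduce jobs on the cluster.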
58. Thanks for the kind sponsorship
given to the AWS LONDON USER
GROUP