Problem: Scaling reliably is hard
- What you need is a fault-tolerant store and a fault-tolerant compute framework.
- Handle hardware faults transparently and efficiently.
- High availability: not dependent on any one component.
- Even on a big cluster, some jobs take days, and even simple things become complicated in a failure-rich environment.
- Every component is a point where things can fail, and that failure has to be managed.
- With many computers and many disks, failures are common: with 1,000 computers x 10 disks each, expect roughly 1 node failure and 10 disk failures per day (a rough estimate is sketched below).
- Some failures are intermittent or difficult to detect.
- Computation must still succeed, and not run noticeably slower, under these conditions.
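To make the failure-rate claim concrete, here is a minimal back-of-the-envelope sketch. The MTBF values (roughly three years per node and per disk) are illustrative assumptions, not figures from the slide; plug in your own hardware data.

```python
# Back-of-the-envelope estimate of daily failures on a large cluster.
# The MTBF values below are assumptions (~3 years each), not figures
# from the slide; adjust them to match your own hardware experience.

def expected_failures_per_day(component_count: int, mtbf_days: float) -> float:
    """Expected component failures per day, assuming failures are
    independent and spread uniformly over the mean time between failures."""
    return component_count / mtbf_days

NODES = 1_000
DISKS_PER_NODE = 10
NODE_MTBF_DAYS = 1_000   # assumed ~3-year MTBF per node
DISK_MTBF_DAYS = 1_000   # assumed ~3-year MTBF per disk

node_failures = expected_failures_per_day(NODES, NODE_MTBF_DAYS)
disk_failures = expected_failures_per_day(NODES * DISKS_PER_NODE, DISK_MTBF_DAYS)

print(f"Expected node failures/day: {node_failures:.1f}")   # ~1.0
print(f"Expected disk failures/day: {disk_failures:.1f}")   # ~10.0
```

With numbers in this range, a daily node failure and roughly ten disk failures are the normal operating condition of the cluster, not an exception, which is why the store and the framework both have to tolerate them.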
Apache Hadoop - a new paradigm
- Scales to thousands of commodity computers.
- Can effectively use all cores and spindles simultaneously; if you buy hardware, you want to maximize its use.
- A new software stack built on a different foundation.
- Not very mature yet.
- In use by most Web 2.0 companies and many Fortune 500 firms.
The first is "simple algorithms and lots of data trump complex models." This comes from an IEEE article written by three research directors at Google, titled "The Unreasonable Effectiveness of Data." It was a reaction to an essay called "The Unreasonable Effectiveness of Mathematics in the Natural Sciences," which made the point that simple formulas can explain the complex natural world, the most famous example being E = mc² in physics. The Google authors noted that economists were envious because they lacked similarly neat models to explain human behavior. But in natural language processing, an area that is notoriously complex and that years of AI research had tried to address, they found that relatively simple approaches applied to massive data produced stunning results. They cited an example of scene completion: an algorithm removes something from a picture, a car for instance, and, drawing on a corpus of thousands of photos, fills in the missing background. The algorithm performed rather poorly until the corpus was increased to millions of photos; with that amount of data the same algorithm performed extremely well. While not a direct example from financial services, I think it's a great analogy. After all, aren't you looking for an approach that can fill in the missing pieces of a picture or pattern?