Wondering how to deliver services to your clients in real time, on their preferred devices? Need to automate your financial supply chain, including risk and compliance functions, and move to a pay-for-performance model?
Learn about use cases from across the big data ecosystem, ranging from AML compliance and trade lifecycle management to fraud detection and digital transformation, and see how firms are introducing risk data aggregation and compliance initiatives. Find out how best to leverage Open Enterprise Hadoop to achieve these goals.
Provides a single consolidated group of generic processors running an open source data ingestion, persistence and parallel computing stack
Eliminate the licensing costs associated with proprietary compute grid technology
Eliminate the data caching licensing costs associated with in-memory data grids (IMDGs)
Eliminate the relational database and data appliance costs associated with risk and compliance applications
Eliminate the costs associated with ETL tools such as DataStage and Ab Initio
Provides a unified development environment for analytics
Use open source languages such as R for analytic development, while also supporting SAS and MATLAB interfaces
Provide open source tools for source code management and QA
Create a partnership between quants and IT developers so that IT can package analytics for deployment rather than recoding them
Sharply decrease analytic development and test times through a calculator framework that delivers data as a service (see the sketch below)
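A minimal sketch of what such a calculator framework could look like. The DataService and Calculator abstractions and the HistVaR analytic are illustrative assumptions, not names from any specific product; the point is that the quant writes the analytic against a data-as-a-service interface and IT can deploy it unchanged.

```scala
// Hypothetical sketch: analytics written against a data-as-a-service
// interface, so IT can package and deploy them without recoding.
trait DataService {
  def series(key: String): Vector[Double] // e.g. historical P&L for a book
}

trait Calculator[A] {
  def run(data: DataService): A
}

// A historical-VaR analytic expressed as a pure function of service data.
final class HistVaR(book: String, confidence: Double) extends Calculator[Double] {
  def run(data: DataService): Double = {
    val pnl = data.series(book).sorted           // ascending: worst losses first
    -pnl(((1 - confidence) * (pnl.size - 1)).toInt) // loss at the tail quantile
  }
}
```

Because every analytic is just a Calculator[A], the deployment harness, scheduling and data wiring stay the same whichever analytic the quants hand over.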
Provides a lightweight way to ingest and store data
Store and record each risk and compliance group's transformations to the base data, supporting BCBS 239 compliance
Support multiple views on the same data with schema-on-read capability (sketched after this list)
Support access to the data from multiple languages via SQL, Pig, Scala and Java interfaces
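As an illustration of schema-on-read and multi-language access, here is a sketch in Spark. The paths, column names and JSON layout are assumptions for the example; the same raw files are served through two different views and are also reachable from SQL.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types._

val spark = SparkSession.builder().appName("schema-on-read").getOrCreate()

// Schema-on-read: two groups project different schemas onto the same raw files.
val riskView = new StructType()
  .add("tradeId", StringType)
  .add("notional", DoubleType)
val complianceView = new StructType()
  .add("tradeId", StringType)
  .add("counterparty", StringType)
  .add("bookedAt", TimestampType)

val risk       = spark.read.schema(riskView).json("/data/lake/trades")
val compliance = spark.read.schema(complianceView).json("/data/lake/trades")

// The same data is also reachable from SQL (and, via the underlying files,
// from Pig or plain Java MapReduce).
compliance.createOrReplaceTempView("trades")
spark.sql("SELECT counterparty, count(*) FROM trades GROUP BY counterparty").show()
```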
We use a cache to keep frequently accessed reference data for normalization and validation.
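A minimal read-through cache for that reference data might look like the following. The loader function and the currency-code example are illustrative stand-ins for whatever reference store sits behind the pipeline.

```scala
import java.util.concurrent.ConcurrentHashMap

// Read-through cache: look up reference data once, serve repeats from memory.
// loadFromStore is a stand-in for the real lookup (HBase, a REST service, etc.).
final class RefDataCache(loadFromStore: String => Option[String]) {
  private val cache = new ConcurrentHashMap[String, Option[String]]()
  def get(key: String): Option[String] =
    cache.computeIfAbsent(key, k => loadFromStore(k))
}

// Example: validate that a currency code exists in the reference store.
val ccyCache = new RefDataCache(code =>
  if (Set("USD", "EUR", "GBP")(code)) Some(code) else None)
def validCcy(code: String): Boolean = ccyCache.get(code).isDefined
```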
Green represents plugins to the flow, both out of the box and custom-built by the business.
Orange lines represent connectivity back to the previous steps (data lineage).
Could data loading and validation be done in a MapReduce fashion?
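One possible answer, sketched here in Spark rather than raw MapReduce: treat each record's parse-and-validate as a map step, so the work parallelizes across the cluster and valid records split cleanly from rejects. The file path and the "tradeId,notional" record layout are assumptions for the example.

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("load-and-validate").getOrCreate()

// Each line is parsed and validated independently, exactly like a map phase.
val parsed = spark.sparkContext.textFile("/data/landing/trades.csv").map { line =>
  line.split(',') match {
    case Array(id, n) if scala.util.Try(n.toDouble).isSuccess => Right(id -> n.toDouble)
    case _                                                    => Left(line)
  }
}

val valid   = parsed.collect { case Right(trade) => trade } // flows on downstream
val rejects = parsed.collect { case Left(bad)    => bad }   // quarantined for review
```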
Need to be able to plug and play with the source systems, i.e. a new source system can be added with no impact on the feeds already being consumed; pluggable infrastructure (see the sketch below).
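One way to get that pluggability, sketched with an illustrative adapter trait and registry. The SourceAdapter and SourceRegistry names are assumptions, and MUREX is just an example feed; the point is that adding a feed means registering one new adapter while existing feeds stay untouched.

```scala
// Sketch of a pluggable source-system layer.
trait SourceAdapter {
  def systemId: String
  // Raw record -> canonical columns (the L1 layer described below).
  def toCanonical(raw: String): Map[String, String]
}

object SourceRegistry {
  private var adapters = Map.empty[String, SourceAdapter]
  def register(a: SourceAdapter): Unit = synchronized { adapters += a.systemId -> a }
  def adapterFor(systemId: String): Option[SourceAdapter] = adapters.get(systemId)
}

// A new source system plugs in without touching the pipeline:
SourceRegistry.register(new SourceAdapter {
  val systemId = "MUREX"
  def toCanonical(raw: String) = Map("tradeId" -> raw.take(12).trim)
})
```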
L0 is OSS-specific; L1 defines the mandatory/optional columns needed for each entity type (trade, position, security) and could be extended depending on the OSS; L2 is the final form, including any additional attributes, in a key-value pair structure:
Entity, AttributeName, Value, DataType (+ bi-temporal fields)
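As a sketch, that L2 record could be modelled as follows. The field names for the two time axes are assumptions; the bi-temporal pairing of business time and system (knowledge) time is what gives the lineage record its BCBS 239 value.

```scala
import java.time.Instant

// The L2 key-value form as a record type. validFrom/validTo say when the
// fact was true in the business world; asOfFrom/asOfTo say when the system
// knew it. Field names are illustrative.
final case class AttributeFact(
  entity: String,        // e.g. trade, position or security identifier
  attributeName: String, // e.g. "notional"
  value: String,
  dataType: String,      // e.g. "decimal"
  validFrom: Instant, validTo: Instant, // business time
  asOfFrom: Instant, asOfTo: Instant    // system (knowledge) time
)

val fact = AttributeFact("TRADE-42", "notional", "1000000", "decimal",
  Instant.parse("2015-01-01T00:00:00Z"), Instant.MAX,
  Instant.now(), Instant.MAX)
```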
High-level architecture, highlighting the components that are needed
DLG: Diagram is too busy; there is a lot going on, so to me it just reads as busy
DLG: Again, the diagram is too busy; there is a lot going on, so to me it just reads as busy
Hundreds of organizations have turned to Hortonworks because Hadoop is ultimately a platform decision; it is typically the first step toward re-architecting your back-end data systems.
The organizations that have already succeeded with Hadoop required not just a stable, reliable and complete Hadoop solution but, more importantly, a connection with the architects, builders and operators of this open source technology. They found this in Hortonworks.
And as with any platform decision, it is imperative that Hadoop integrate with the tools and systems already resident in your data center. We forge deep relationships with hundreds of partners so that you can not only ensure integration but also effectively reapply existing systems and skill sets to your big data challenges.
At Hortonworks, we hold true to these foundational beliefs and have partnered with hundreds of organizations, from some of the largest and earliest big data adopters to the most conservative and data-rich companies on the planet. We ensure that your Hadoop journey is successful, and more companies are turning to Hortonworks today than to any other offering on the market. We invite you to join our community.