Polumesa
Kostas Tsolakakis
Slide transcript:
1. Πολυμέσα (Multimedia)
2.–10. (no extractable text)