Batch Processing vs Stream Processing Difference

1. Batch Processing
• Batch processing processes huge data volumes within a specific time after production.
• Batch processing compiles a large volume of data all at once.
• The data size to be processed is finite and known in advance.
• The input graph in batch processing is static.
Stream Processing
• Stream processing processes continuous data in real time as it is produced.
• Stream processing is done in short intervals, as soon as data is produced.
• The data size in stream processing is unknown and unbounded.
• In stream processing, the input graph is dynamic.
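As a minimal sketch of the contrast above (plain Python, not from the slides; the sample values are made up): the batch function only runs once the complete, finite dataset is available, while the stream function consumes an unbounded source one record at a time and emits a running result.

```python
from typing import Iterable, Iterator


def batch_total(records: list[float]) -> float:
    """Batch model: the full, finite dataset exists before processing starts."""
    return sum(records)


def stream_totals(records: Iterable[float]) -> Iterator[float]:
    """Stream model: records arrive one at a time from a possibly unbounded
    source, and a running result is emitted per record."""
    total = 0.0
    for value in records:
        total += value
        yield total  # updated continuously, not once at the end


# Batch: one answer after all data has been collected.
print(batch_total([12.0, 7.5, 3.25]))

# Stream: intermediate answers as data keeps arriving
# (any iterator works, including one that never ends).
for running in stream_totals(iter([12.0, 7.5, 3.25])):
    print(running)
```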
2. Batch Processing vs Stream Processing
Batch processing is done on a large batch of data, and the latency can be in minutes, hours, or days. It requires the most storage and processing resources, since it works through big batches of data.
Real-time data processing has a latency of milliseconds to seconds and processes the current data packet, or a few of them at a time. It requires less storage, since only recent or current sets of data packets are processed, and it has fewer computational requirements.
Streaming data analysis works on continuous data streams, and the latency is guaranteed in milliseconds. The current data packet must be processed as it arrives, so the processing resources must always be available to meet the real-time processing guarantees.
3. Batch Processing vs Stream Processing
4. Batch Processing vs Stream Processing
Batch processing works over all or most of the data, whereas stream processing works over a rolling window or the most recent record. Batch processing therefore handles a large batch of data, while stream processing handles individual records or micro-batches of a few records. In terms of performance, the latency of batch processing is in minutes to hours, while the latency of stream processing is in seconds or milliseconds.
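A short illustration of the "all the data" versus "rolling window" distinction above (a plain-Python sketch; the window size of 3 and the sample values are arbitrary choices for the example):

```python
from collections import deque


def batch_average(values: list[float]) -> float:
    """Batch: one average computed over the entire dataset."""
    return sum(values) / len(values)


def rolling_average(values, window: int = 3):
    """Stream: an average over only the most recent `window` records."""
    recent = deque(maxlen=window)  # older records fall out automatically
    for v in values:
        recent.append(v)
        yield sum(recent) / len(recent)


data = [10, 20, 30, 40, 50]
print(batch_average(data))          # one result over all records
print(list(rolling_average(data)))  # one result per record, over a rolling window
```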
5. What is Batch Processing?
Batch processing refers to the processing of blocks of data that have already been stored over a period of time, for example, processing the transactions performed by a financial firm in a week. Such data can contain millions of records per day that are stored as a file or a set of records. The file undergoes processing at the end of the day for the various analyses the firm requires, which is a time-consuming process. Batch processing is ideal for very large data sets and projects that involve deeper data analysis. The method is less suitable for projects that require speed or real-time results. Additionally, many legacy systems only support batch processing.
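A minimal batch-style sketch of the end-of-day example above, using pandas. The file name, column names, and aggregation are hypothetical, chosen only to mirror the scenario described; they are not from the slides.

```python
import pandas as pd

# Hypothetical end-of-day batch job: the whole day's transactions already
# sit in one file before processing begins.
transactions = pd.read_csv("transactions_2024-01-15.csv")  # assumed columns: account_id, amount

# The analysis runs once, over the complete batch.
daily_summary = (
    transactions
    .groupby("account_id")["amount"]
    .agg(["count", "sum", "mean"])
    .rename(columns={"count": "num_txns", "sum": "total", "mean": "avg_txn"})
)

daily_summary.to_csv("daily_summary_2024-01-15.csv")
```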
6. Batch Processing Use Cases
Batch processing is used in a variety of scenarios, from simple data transformations to a complete ETL pipeline. In the context of big data, batch processing may operate over very large data sets, where the computation takes a significant amount of time. It works well when you don't need real-time analytics results, or when processing large volumes of data for detailed insights matters more than getting fast answers. Batch processing fits when:
• Real-time transfers and results are not crucial
• Large volumes of data need to be processed
• Data is accessed in batches as opposed to in streams
• Complex algorithms must have access to the entire batch (see the sketch below)
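To illustrate the last bullet: an exact quantile cannot be derived from one record at a time or from a small window, so it naturally fits batch processing. A minimal sketch, with made-up input data (the p95 here is a simple nearest-rank style approximation):

```python
import statistics


def exact_percentiles(amounts: list[float]) -> dict[str, float]:
    """Needs the whole dataset in hand: exact quantiles require sorting
    (or otherwise retaining) the entire batch."""
    ordered = sorted(amounts)  # requires access to the complete batch
    n = len(ordered)
    return {
        "median": statistics.median(ordered),
        "p95": ordered[int(0.95 * (n - 1))],
    }


print(exact_percentiles([12.0, 7.5, 3.25, 99.0, 41.0, 5.5]))
```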
7. Technology Choices for Batch Processing
1. Azure Synapse Analytics: an analytics service that brings together enterprise data warehousing and big data analytics.
2. Azure Data Lake Analytics: an on-demand analytics job service that simplifies big data processing.
3. HDInsight: an open-source analytics service in the cloud that includes open-source frameworks such as Hadoop, Apache Spark, Apache Kafka, and more.
4. Azure Databricks: integrates with open-source libraries and provides the latest version of Apache Spark.
5. Azure Distributed Data Engineering Toolkit: used for provisioning on-demand Spark on Docker clusters in Azure.
8. What is Stream Processing?
Stream processing is a big data technology that allows us to process data in real time as it arrives and to detect conditions within a short period of receiving the data. It lets us feed data into analytics tools as soon as it is generated and get instant analytics results. Stream processing is ideal for projects that require speed and nimbleness; it is less relevant for projects with very large data volumes or deep data analysis. Stream processing is useful for tasks like fraud detection, social media sentiment analysis, log monitoring, analyzing customer behavior, and more.
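A toy stream-processing sketch in plain Python: each event is inspected the moment it arrives and a condition is checked immediately, rather than waiting for a batch to accumulate. The event source, field names, and the amount threshold standing in for a fraud rule are all made up for illustration.

```python
import time
from typing import Iterator


def event_source() -> Iterator[dict]:
    """Stand-in for a real event stream (Kafka topic, socket, message queue, ...)."""
    for amount in [25.0, 40.0, 975.0, 12.5]:
        yield {"amount": amount, "ts": time.time()}


FRAUD_THRESHOLD = 500.0  # arbitrary rule for the sketch


def process_stream(events: Iterator[dict]) -> None:
    for event in events:                       # handled as each event arrives
        if event["amount"] > FRAUD_THRESHOLD:  # condition detected in real time
            print(f"ALERT: suspicious transaction {event['amount']:.2f}")
        else:
            print(f"ok: {event['amount']:.2f}")


process_stream(event_source())
```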
9. Technology Choices for Stream Processing
1. Azure Stream Analytics: a real-time analytics and event-processing engine designed to analyze and process high volumes of fast streaming data from multiple sources.
2. HDInsight with Storm: Apache Storm is a distributed, fault-tolerant, open-source computation system used to process streams of data in real time with Apache Hadoop.
3. Apache Spark in Azure Databricks
4. Azure Kafka Stream APIs
5. HDInsight with Spark Streaming: Apache Spark Streaming provides data stream processing on HDInsight Spark clusters.
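As a concrete example of the Spark-based options above, a minimal PySpark Structured Streaming job (generic Spark code, not specific to Databricks or HDInsight; the socket source on localhost:9999 is only a demo input, e.g. fed with `nc -lk 9999`):

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import explode, split

spark = SparkSession.builder.appName("StreamingWordCount").getOrCreate()

# Unbounded input: lines arriving on a socket.
lines = (spark.readStream
         .format("socket")
         .option("host", "localhost")
         .option("port", 9999)
         .load())

# The same DataFrame API as batch Spark, applied to an unbounded table.
words = lines.select(explode(split(lines.value, " ")).alias("word"))
counts = words.groupBy("word").count()

# Results are updated continuously as new data arrives.
query = (counts.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()
```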
10. Batch Processing vs Stream Processing
• The batch processing model requires a set of data collected over time, while the stream processing model requires data to be fed into an analytics tool, often in micro-batches, in real time.
• The batch processing model handles a large batch of data, while the stream processing model handles individual records or micro-batches of a few records.
• Batch processing works over all or most of the data, while stream processing works over a rolling window or the most recent record.
• From a performance point of view, the latency of the batch processing model is in minutes to hours, while the latency of the stream processing model is in seconds or milliseconds.
• Batch processing is a lengthy process meant for large quantities of information that aren't time-sensitive, whereas stream processing is fast and meant for information that is needed immediately.
11. Batch Processing vs Stream Processing
Batch processing is the processing of transactions in a group or batch. No user interaction is required once batch processing is running. This distinguishes batch processing from transaction processing, which handles transactions one at a time and requires user intervention.
12. Batch Processing vs Stream Processing
Stream processing is the process of analyzing streaming data in real time. Analysts can continuously monitor a stream of data to achieve various goals. Stream processing is a low-latency way to capture information about events and process the data while it is still in transit. A data stream, or event stream, can include almost any type of information: social network or web browsing path data, factory production and other process data, stock or financial transaction details, patient data in a hospital, machine learning system data, and IoT (Internet of Things) sensor data.
13. THANK YOU. Like the video and subscribe to the channel.
