With the rapid growth of big data technologies today, it is becoming very important to use the right tool for each stage of the pipeline. That stage can be anything: data ingestion, data processing, data retrieval, data storage, and so on. Today we are going to focus on one of the most popular of those technologies, Apache Spark. As we all know, Spark APIs such as RDDs and DataFrames are well suited to big data workloads, but there are things we do in Spark that make it slow or inefficient, whether through misconfiguration or plain misuse, and we'll be talking about those in today's session (a quick sketch of the two APIs appears below). After that, we are going to unravel the cases where choosing Spark as your big data technology would be the best choice you could make. We will also consider the cases where Spark would be a disastrous choice for your project and should be avoided.
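Before diving in, here is a minimal sketch of the two APIs mentioned above, assuming a local Spark setup; the sample data, column names, and the `local[*]` master are placeholders for illustration, not part of the original discussion.

```scala
import org.apache.spark.sql.SparkSession

object RddVsDataFrameSketch {
  def main(args: Array[String]): Unit = {
    // Local session purely for illustration; in a real deployment the master
    // and resource settings come from your cluster manager, not the code.
    val spark = SparkSession.builder()
      .appName("rdd-vs-dataframe-sketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // RDD API: low-level, functional transformations over raw JVM objects.
    val rdd = spark.sparkContext.parallelize(Seq(("alice", 34), ("bob", 29)))
    val adults = rdd.filter { case (_, age) => age >= 30 }
    adults.collect().foreach(println)

    // DataFrame API: declarative and schema-aware, so Spark's Catalyst
    // optimizer can plan the query instead of running it verbatim.
    val df = Seq(("alice", 34), ("bob", 29)).toDF("name", "age")
    df.filter($"age" >= 30).show()

    spark.stop()
  }
}
```

Both snippets express the same filter; the DataFrame version simply hands more information to Spark's optimizer, which is part of why API choice matters for performance later in this session.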