This document summarizes challenges encountered with Structured Streaming in Spark and the workarounds adopted for each. The challenges include: joining streaming and batch data is inefficient because pushdown predicates are not applied to the batch side; caching batch DataFrames causes problems; early releases of Structured Streaming lacked a JDBC sink; checkpoints can become inconsistent; and an already-aggregated streaming DataFrame cannot be aggregated again. The proposed workarounds are: cache the batch data outside Spark, look up batch data inside map/flatMap instead of joining, write to the database directly, store checkpoints on NFS, and implement custom aggregations without Spark SQL.
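As a minimal, Spark-independent sketch of the "look up batch data inside map/flatMap" workaround: rather than joining a stream against a batch DataFrame (where no pushdown predicate is applied), each worker keeps a local cache of the batch data and enriches records one at a time. All names here (`BATCH_TABLE`, `lookup`, `enrich`) are hypothetical; in a real job the cache would be loaded once per executor from the external store, not hard-coded.

```python
from functools import lru_cache

# Stand-in for the batch side: in a real job this would be loaded from a
# database or file once per executor, not embedded in the code.
BATCH_TABLE = {
    "user-1": {"country": "DE"},
    "user-2": {"country": "FR"},
}

@lru_cache(maxsize=None)
def lookup(user_id):
    # Memoized lookup into the batch data; with a real external store this
    # avoids issuing one query per streaming record.
    return BATCH_TABLE.get(user_id)

def enrich(record):
    # The function applied in map(): attach the matching batch row to each
    # streaming record, keyed by user_id.
    info = lookup(record["user_id"])
    return {**record, "country": info["country"] if info else None}

# A micro-batch of streaming records.
stream = [
    {"user_id": "user-1", "event": "click"},
    {"user_id": "user-3", "event": "view"},
]

enriched = [enrich(r) for r in stream]  # stands in for stream.map(enrich)
```

The same shape carries over to Spark: the lookup cache lives in the closure (or is broadcast) and the enrichment runs inside `map`/`flatMap`, sidestepping the join entirely.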