Hadoop has a master/slave architecture. The master node runs the NameNode, the JobTracker, and optionally the SecondaryNameNode. The NameNode holds HDFS metadata: the filesystem namespace and the mapping from files to blocks and from blocks to the DataNodes that store them. DataNodes on the slave nodes store the actual data blocks. The JobTracker schedules MapReduce jobs, assigning tasks to TaskTrackers on the slaves, which perform the work. Despite its name, the SecondaryNameNode is not a failover standby; it periodically merges the NameNode's edit log into a checkpoint of the filesystem image, which shortens NameNode restart time. A MapReduce job reads its input as blocks stored in HDFS: map tasks process the blocks in parallel on the slaves, and reduce tasks consolidate the map outputs into the final result.
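The map/shuffle/reduce flow described above can be sketched outside Hadoop. The following Python toy (a word count; plain functions, not Hadoop's actual API) mimics how each map task emits key/value pairs for one input block and how reduce tasks consolidate the grouped results:

```python
from collections import defaultdict

def map_task(block):
    # Map phase: emit a (word, 1) pair for every word in one input block.
    return [(word, 1) for word in block.split()]

def shuffle(mapped_outputs):
    # Shuffle phase: group values by key across all map outputs.
    groups = defaultdict(list)
    for pairs in mapped_outputs:
        for key, value in pairs:
            groups[key].append(value)
    return groups

def reduce_task(key, values):
    # Reduce phase: consolidate all values seen for one key.
    return key, sum(values)

# An input file split into blocks; in Hadoop, each block would feed
# one map task running on a slave node.
blocks = ["the quick brown fox", "the lazy dog", "the fox"]
mapped = [map_task(b) for b in blocks]   # these run in parallel in Hadoop
grouped = shuffle(mapped)
counts = dict(reduce_task(k, v) for k, v in grouped.items())
print(counts["the"])  # 3
```

In a real job the shuffle is performed by the framework between the map and reduce phases, partitioning keys across the reduce tasks rather than gathering them in one process as this sketch does.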