The document proposes two complementary approaches to improving visualization evaluation: an analytic framework for visualization criticism, used to assess design effectiveness, and cost-effective crowdsourced experimentation techniques for testing visualizations with large numbers of participants. It references prior studies on evaluating visual encodings and position judgments, and suggests that crowdsourcing could supplement traditional lab studies with larger-scale evaluation.