Quality assessment (QA) research strongly depends upon subjective experiments to provide calibration data as well as a testing mechanism. After all, the goal of all QA research is to make quality predictions that are in agreement with the subjective opinion of human observers. In order to calibrate QA algorithms and test their performance, a data set of images and videos whose quality has been ranked by human subjects is required. The QA algorithm may be trained on part of this data set and tested on the rest.

At the Laboratory for Image and Video Engineering (LIVE), in collaboration with the Department of Psychology at The University of Texas at Austin, an extensive experiment was conducted to obtain scores from human subjects for a number of images distorted with different distortion types. These images were acquired in support of a research project on generic shape matching and recognition.

Related publications:

H.R. Sheikh, Z. Wang, L. Cormack and A.C. Bovik, "LIVE Image Quality Assessment Database Release 2," http://live.ece.utexas.edu/research/quality.

H.R. Sheikh, M.F. Sabir and A.C. Bovik, "A statistical evaluation of recent full reference image quality assessment algorithms," IEEE Transactions on Image Processing, vol. 15, no. 11, pp. 3440-3451, Nov. 2006.

Z. Wang, A.C. Bovik, H.R. Sheikh and E.P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, April 2004.
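The train-on-part, test-on-the-rest evaluation described above is commonly scored by rank correlation between algorithm predictions and subjective scores on the held-out portion. The sketch below illustrates this under stated assumptions: the DMOS values and the noisy predictor are synthetic stand-ins (the real scores come from the database), and Spearman correlation is computed with a minimal hand-rolled rank function that ignores ties.

```python
import random

def rank(values):
    # Assign each value its position in sorted order (no tie handling,
    # which is adequate for continuous synthetic data).
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    for r, i in enumerate(order):
        ranks[i] = float(r)
    return ranks

def spearman(x, y):
    # Spearman rank-order correlation = Pearson correlation of the ranks.
    rx, ry = rank(x), rank(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical subjective scores (DMOS-like, 0-100) for 100 images.
random.seed(0)
dmos = [random.uniform(0, 100) for _ in range(100)]
# A hypothetical QA model whose predictions track DMOS with some noise.
pred = [d + random.gauss(0, 10) for d in dmos]

# Hold out the last 20% as the test set; the rest would be used for training.
split = int(0.8 * len(dmos))
test_dmos, test_pred = dmos[split:], pred[split:]
srocc = spearman(test_pred, test_dmos)
print(f"SROCC on held-out set: {srocc:.2f}")
```

A higher SROCC on the held-out set indicates better agreement with subjective opinion; this is the style of evaluation used in the Sheikh, Sabir and Bovik (2006) comparison listed below.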