Advances in neuropsychological assessment have resulted in increased use of computerized evaluation instruments. However, computer-assisted assessment must be validated by the same methods as more traditional psychological assessment. The computer versions of the Category Test appear to be such instruments. Advantages of computerized administration include more rigorous and standardized administration, ready calculation of latency scores and summary indices, and support for detailed pattern analyses. Disadvantages include the lack of normative data, confusion about which set of norms to apply, and the demonstrated tendency of some computer Category Tests to yield higher error scores than the slide version. This study summarizes normative data (N = 149) on a version of the Computer Category Test (CCT) and compares findings from normal and clinical populations. Large and significant differences emerged between clinically referred individuals and normal volunteers on the CCT. Results on the CCT were similar to the findings of Heaton, Grant, and Matthews (1991) (mean errors = 39.6 versus 42.6 for normal individuals). Recent data on detecting simulation and malingering have been developed with the CCT. Further development of census-matched normative data is recommended, and the DeFilippis Computer Category Test appears to be the most promising instrument at this time. It is recommended that the CCT be administered with other measures of executive functioning to allow for increased clinical use of the instrument and to establish convergent validity.