4.1: Evaluation Methods
Develop better and more consistent methods for evaluating our products and measuring how well we are satisfying customer needs.

Other Information:
Without a good program for evaluating our collection and analysis, we cannot speak with any degree of confidence about how well we are or are not hitting the mark. Without such a program, we also miss an opportunity to make studied judgments about our activities and what we can do to improve our collection and analytic posture. We simply cannot continue to rely on anecdotal evidence, data that cannot be replicated, and statistics that are questionable and inconsistent across the community. (See also the Interacting with Collectors chapter.)
Implementing Actions:
- Work with the collection community to develop a single evaluation process that incorporates both collection and analysis. Initiate a Community-wide evaluation process on core issues, to be presented to the DCI as an annual report. Conduct a first-year pilot on two or three issues, review the pilot for lessons learned, and adjust the program. Begin full-scale evaluations by FY 2001.
- Establish blue ribbon panels (ideally a mix of insiders and outsiders) under the purview of the ADCI/AP to conduct evaluations of event-driven production. Panel members would vary depending on the issue involved. Studies would be initiated at the behest of the ADCI/AP, in consultation with the NIPB. In addition to assessing performance, these evaluations would include lessons learned and recommendations.
- Explore electronic audit trails and other electronic "survey" measures to encourage customer feedback; to determine more accurately customer usage, productivity, and the timeliness, relevance, and quality of products; and to obtain other useful statistics. Investigate possible procedural, legal, and security issues connected with the use of audit trail data. (A sketch of one way such audit data might be aggregated follows this list.)
- Learn how web-based businesses measure customer satisfaction and determine what we might profitably emulate. (See the satisfaction-score sketch after this list.)
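
To make the audit-trail idea concrete, the following is a minimal sketch in Python, assuming a hypothetical append-only event log; the record fields, function names, and metrics here (AuditEvent, usage_by_product, mean_rating) are illustrative assumptions, not anything the plan specifies.

    # Illustrative sketch only: fields, names, and metrics are hypothetical.
    from collections import Counter
    from dataclasses import dataclass
    from datetime import datetime
    from typing import Optional

    @dataclass
    class AuditEvent:
        """One record in an append-only audit trail of product usage."""
        timestamp: datetime
        customer_id: str              # who accessed the product
        product_id: str               # which product was accessed
        action: str                   # e.g. "open", "search", "feedback"
        rating: Optional[int] = None  # optional 1-5 feedback score

    def usage_by_product(events):
        """Count accesses per product, a crude proxy for customer usage."""
        return Counter(e.product_id for e in events if e.action != "feedback")

    def mean_rating(events, product_id):
        """Average feedback rating for one product, or None if no ratings exist."""
        ratings = [e.rating for e in events
                   if e.product_id == product_id and e.rating is not None]
        return sum(ratings) / len(ratings) if ratings else None

    log = [
        AuditEvent(datetime(2000, 1, 5), "cust-1", "prod-A", "open"),
        AuditEvent(datetime(2000, 1, 5), "cust-1", "prod-A", "feedback", rating=4),
        AuditEvent(datetime(2000, 1, 6), "cust-2", "prod-A", "open"),
    ]
    print(usage_by_product(log))       # Counter({'prod-A': 2})
    print(mean_rating(log, "prod-A"))  # 4.0

Because a log of this kind records who read what, it raises exactly the procedural, legal, and security questions the action item flags; any real design would need those settled first.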
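
On the last action item, web-based businesses commonly track satisfaction with simple survey-derived scores such as CSAT (the share of satisfied responses) and Net Promoter Score. The plan names no specific measure, so the sketch below shows these two common formulas purely as examples of what might be emulated.

    # Illustrative sketch: two common web-industry satisfaction metrics.
    # Nothing in the plan mandates these particular measures.

    def csat(scores_1_to_5):
        """CSAT: percent of responses rating 4 or 5 on a 1-5 scale."""
        if not scores_1_to_5:
            raise ValueError("no survey responses")
        satisfied = sum(1 for s in scores_1_to_5 if s >= 4)
        return 100.0 * satisfied / len(scores_1_to_5)

    def net_promoter_score(scores_0_to_10):
        """NPS: percent promoters (9-10) minus percent detractors (0-6)."""
        if not scores_0_to_10:
            raise ValueError("no survey responses")
        promoters = sum(1 for s in scores_0_to_10 if s >= 9)
        detractors = sum(1 for s in scores_0_to_10 if s <= 6)
        return 100.0 * (promoters - detractors) / len(scores_0_to_10)

    print(csat([5, 4, 4, 2, 3]))                     # 60.0 (3 of 5 satisfied)
    print(net_promoter_score([10, 9, 8, 7, 3, 10]))  # ~33.3 (3 promoters, 1 detractor of 6)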
Indicator(s):