A prototype is a general description that captures what the entire set of exemplars belonging to a category looks like. We investigate how prototypes, in the form of mathematical averages of a category's exemplar vectors, can be represented, extracted, accessed, and used for learning in an Artificial Neural Network (ANN). From the way an ANN classifies exemplars into categories, we conclude that prototype access (the production of an extracted prototype) can be performed with a very simple architecture. We then show how the same architecture supports prototype extraction by exploiting how the back-propagation learning rule handles one-to-many mappings, and we note that no extensions to the classification training sets are needed as long as they conform to certain restrictions. We further show how the extracted prototypes can be used to learn new categories that are compositions of existing categories, which can lead to reduced training sets and, ultimately, reduced learning times. Several restrictions must be observed for this to work; for example, the exemplar representations must be systematic and the categories linearly separable. The results, and other properties of our network, are compared with other architectures that also employ some notion of a prototype. We conclude that prototype extraction and learning with prototypes are possible using a simple ANN architecture. Finally, we relate our system to the symbol grounding problem and point out directions for future work.
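As a minimal sketch of the averaging behaviour the abstract refers to, the following Python example (the data, network shape, and training loop are illustrative assumptions, not the paper's actual setup) trains a single linear layer by gradient descent on a one-to-many mapping from a category code to that category's exemplar vectors; under squared error, the learned output converges to the per-category mean, i.e. the prototype.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: two categories, each with exemplars scattered
# around a distinct "true" prototype vector (an assumption for this demo).
true_prototypes = np.array([[1.0, 0.0, 1.0, 0.0],
                            [0.0, 1.0, 0.0, 1.0]])
exemplars, labels = [], []
for c, proto in enumerate(true_prototypes):
    for _ in range(50):
        exemplars.append(proto + 0.1 * rng.standard_normal(proto.shape))
        labels.append(c)
exemplars = np.array(exemplars)
onehots = np.eye(2)[labels]          # one-hot category codes as inputs

# A single linear layer trained with gradient descent on squared error.
# Mapping one category code to many exemplars is a one-to-many mapping;
# the squared-error minimum is the per-category mean of the targets.
W = np.zeros((2, 4))                 # rows: categories, cols: features
lr = 0.05
for _ in range(2000):
    preds = onehots @ W              # forward pass
    grad = onehots.T @ (preds - exemplars) / len(exemplars)
    W -= lr * grad                   # gradient step

print("extracted prototypes:\n", W.round(2))
print("empirical category means:\n",
      np.array([exemplars[onehots[:, c] == 1].mean(axis=0)
                for c in range(2)]).round(2))
```

In this reduced setting the least-squares solution is exactly the category average, which illustrates why back-propagation on one-to-many mappings can yield prototype-like representations; the paper's architecture and training regime are, of course, more elaborate than this sketch.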