Saliency maps provide a biologically plausible model of visual attention based on parallel preattentive features. Past research with saliency maps has often aimed to find regions of interest in a scene under various conditions or top-down effects. Recent publications suggest learning the significance of preattentive features from visual scanpaths. Our research implements a computational model of saliency maps based on dynamical systems and then proposes a method for recovering feature weights from points of focal attention. The performance of the learning model is evaluated by comparing the learnt focal attention to the training data. Finally, suggestions are made for improving the learning system in future research.
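
As a rough illustration of the setting described above (not the dynamical-systems model itself), the sketch below assumes a saliency map built as a weighted sum of normalized preattentive feature maps, where the weights are the quantities a learning method would recover from recorded points of focal attention; the function names and the toy feature channels are hypothetical.

    # Illustrative sketch only; assumes a weighted-sum saliency map, not the paper's dynamical model.
    import numpy as np

    def normalize(feature_map):
        """Scale a feature map to [0, 1]; uniform maps become all zeros."""
        span = feature_map.max() - feature_map.min()
        return (feature_map - feature_map.min()) / span if span > 0 else np.zeros_like(feature_map)

    def saliency_map(feature_maps, weights):
        """Combine preattentive feature maps into a single saliency map."""
        return sum(w * normalize(f) for w, f in zip(weights, feature_maps))

    def focal_point(saliency):
        """Return the (row, col) of maximal saliency, taken as the point of focal attention."""
        return np.unravel_index(np.argmax(saliency), saliency.shape)

    # Toy example with three hypothetical feature channels (e.g. intensity, colour, orientation).
    rng = np.random.default_rng(0)
    features = [rng.random((64, 64)) for _ in range(3)]
    weights = [0.5, 0.3, 0.2]   # the feature weights a learning method would recover from scanpaths
    print(focal_point(saliency_map(features, weights)))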