Critique: LEAP: A Learning Apprentice for VLSI Design
Mitchell, Mahadevan, Steinberg *

      I found the Discussion section far more interesting than the details of how LEAP captures training examples and forms general rules from them. It was interesting to see the correspondence between features of LEAP and some machine learning systems common today. Take speech recognition, for example. Many such systems require a training phase when first used, and as the system sees more use it learns to better recognize a particular speaker's patterns by continuously identifying new training examples, in much the same way LEAP identifies them. While the actual methods by which training examples are turned into learned behavior are very different, the degree of similarity between such systems is striking given the fairly domain-specific nature of both LEAP and speech recognition.

      Since my machine learning background is stronger in empirical learning methods, it was good to see some of Mitchell's work on analytical methods. It brings to mind the trade-offs between weak/general and strong/specific problem-solving methods. Empirical machine learning seems to correspond to the weak approach: much less domain knowledge is required to perform the learning task, but a great many more training examples are needed to learn effectively. Analytical learning methods require a near-perfect and complete domain theory but far fewer examples. I wonder where a system like FOIL (Quinlan) would fall. While it does seem to be a weak method, it can formulate rules from surprisingly few training examples compared to other empirical methods.

      While not a global solution to the problem of merging rule sets in LEAP, couldn't each instance of LEAP within the same organization use a central store of knowledge? This could address the problem Mitchell raises in a much simpler way than his proposed solution. As rules were added by each LEAP instance, they would become available to all other running instances rather than each instance keeping its own private rule set. Another approach would be to queue the training examples identified by each separate LEAP instance for learning by a central LEAP system; each time a designer started LEAP, it would be initialized from this central LEAP. Hrm... same idea, different approach and run-time conditions. Copying the knowledge base on every startup could be very slow, though, so I'll stick with the central store of knowledge idea.
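The central-store idea above can be sketched in a few lines. This is only an illustration of the proposal, not anything from the LEAP paper: all of the names here (CentralRuleStore, LeapInstance, learn_from_example) are hypothetical, and the "generalization" step is a stand-in for LEAP's actual rule-formation method.

```python
import threading

class CentralRuleStore:
    """One organization-wide store of learned design rules,
    shared by every apprentice instance instead of per-instance
    private rule sets. (Hypothetical sketch, not LEAP's design.)"""

    def __init__(self):
        self._rules = []                # learned if/then design rules
        self._lock = threading.Lock()   # instances may add rules concurrently

    def add_rule(self, rule):
        # A rule learned by any instance becomes visible to all others.
        with self._lock:
            if rule not in self._rules:
                self._rules.append(rule)

    def rules(self):
        with self._lock:
            return list(self._rules)

class LeapInstance:
    """One designer's apprentice session, reading and writing
    the shared store rather than a private knowledge base."""

    def __init__(self, store):
        self.store = store

    def learn_from_example(self, training_example):
        # Stand-in for LEAP's generalization of a captured example.
        rule = f"generalized({training_example})"
        self.store.add_rule(rule)
        return rule

store = CentralRuleStore()
a, b = LeapInstance(store), LeapInstance(store)
a.learn_from_example("nor-gate refinement")
b.learn_from_example("mux decomposition")
# Both instances now see both rules via the shared store.
print(store.rules())
```

The point of the sketch is simply that the merge problem disappears when there is nothing to merge: every instance consults the same rule set, so a rule learned from one designer's example is immediately available to every other designer.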


* Tom M. Mitchell, Sridhar Mahadevan & Louis I. Steinberg, "LEAP: A Learning Apprentice for VLSI Design," Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI-85), 1985, pp. 573-580.

by: Keith A. Pray
Last Modified: August 13, 2004 8:01 PM
© 2004 - 1975 Keith A. Pray.
All rights reserved.