Keith A. Pray - Professional and Academic Site

Critique: LEAP: A Learning Apprentice for VLSI Design
Mitchell, Mahadevan, Steinberg *

      I found the Discussion section far more interesting than the details of how LEAP captures training examples and forms general rules from them. It was interesting to see the correspondence between features of LEAP and some machine learning systems common today. Take speech recognition, for example: many such systems require a training phase when first used, and then, as the system sees more use, they learn to better recognize a particular speech pattern by continuously identifying new training examples, much as LEAP does. While the actual methods by which the training examples are used to learn are very different, the degree of similarity between the systems is striking given the fairly domain-specific nature of both LEAP and voice recognition.

      Since my machine learning background is stronger in empirical learning methods, it was good to see some of Mitchell's work on analytical methods. It brings to mind the trade-offs between weak/general and strong/specific problem solving methods. Empirical machine learning seems to correspond to the weak method: much less domain knowledge is required to perform the learning task, but a great many more training examples are needed to learn effectively. Analytical learning methods, in contrast, require a near-perfect and complete domain theory but far fewer examples. I wonder where a system like FOIL (Quinlan) would fall. While it does seem to be a weak method, it can formulate rules with surprisingly few training examples compared to other empirical methods.

      While not a global solution to the merging of rule sets in LEAP, couldn't each instance of LEAP utilize a central store of knowledge within the same organization? This could solve the problem Mitchell addresses in a much simpler way than his proposed solution. As rules were added by each LEAP instance, they would become available to all the other running instances rather than each keeping its own private rule set. Another approach would be to queue the training examples identified by each separate LEAP instance for learning by a central LEAP system. Each time a designer started LEAP, it could be started from this central LEAP. Hrm... same idea, different approach and run-time conditions. Copying the knowledge base each time could be very slow. I'll stick with the central store of knowledge idea, yeah, that's the ticket.
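The central-store idea above can be sketched in a few lines. This is only an illustration of the architecture, not anything from the LEAP paper; all class and method names here (CentralRuleStore, LeapInstance, and so on) are hypothetical, and real rules would be circuit-refinement rules rather than strings.

```python
class CentralRuleStore:
    """Hypothetical shared rule store: every LEAP instance in the
    organization reads from and writes to this one rule set instead
    of keeping a private copy."""

    def __init__(self):
        self._rules = []

    def add_rule(self, rule):
        # Deduplicate so two instances learning the same rule
        # do not store it twice.
        if rule not in self._rules:
            self._rules.append(rule)

    def all_rules(self):
        return list(self._rules)


class LeapInstance:
    """Stand-in for one designer's running copy of LEAP."""

    def __init__(self, store):
        self.store = store  # shared reference, never copied

    def learn(self, rule):
        # A rule learned here is immediately visible to every
        # other instance -- no merge step, no startup copy.
        self.store.add_rule(rule)

    def rules(self):
        return self.store.all_rules()


store = CentralRuleStore()
a = LeapInstance(store)
b = LeapInstance(store)
a.learn("boolean function -> NOR-gate implementation")
# b now sees the rule a learned, with no knowledge-base copying.
```

The contrast with the queued-examples alternative is that here only rules cross instance boundaries, so nothing resembling the full knowledge base ever needs to be copied at startup.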


* Tom M. Mitchell, Sridhar Mahadevan & Louis I. Steinberg, "LEAP: A Learning Apprentice for VLSI Design." Proc. Int. Joint Conf. on Artificial Intelligence (IJCAI-85), 1985, pp. 573-580.


by: Keith A. Pray
Last Modified: August 13, 2004 8:01 PM
© 2004 - 1975 Keith A. Pray.
All rights reserved.
