
ISBN: 0262631113

ISBN13: 9780262631112

Perceptrons: An Introduction to Computational Geometry


Format: Paperback

Condition: Good

$21.19
Save $13.81!
List Price $35.00
Almost Gone, Only 1 Left!

Book Overview

The first systematic study of parallelism in computation by two pioneers in the field. Reissue of the 1988 Expanded Edition with a new foreword by Léon Bottou. In 1969, ten years after the discovery of... This description may be from another edition of this product.

Customer Reviews

2 ratings

Déjà vu?

In 1958, Cornell psychologist Frank Rosenblatt proposed the 'perceptron', one of the first neural networks to become widely known. A sensory retina layer projected to an association layer of threshold logic units, which in turn connected to a third layer, the response layer. If two groups of patterns are linearly separable, a perceptron network can learn to classify them into separate classes. In this book, Minsky and Papert show that, assuming a diameter-limited sensory retina, a perceptron network cannot always compute connectedness, i.e., determine whether a line figure is one connected line or two separate lines. Extrapolating these conclusions to other sorts of neural networks was a big setback to the field at the time. However, it was subsequently shown that adding a 'hidden' layer to the network overcomes many of the limitations. This book figures prominently in the field of neural networks and is often cited in modern works. Of even greater significance, the history of the perceptron demonstrates the complexity of analyzing neural networks. Before this book, artificial neural networks were considered terrific; after it, limited; and then in the 1980s terrific again. At the time of this writing, it is recognized that despite their physiological plausibility, artificial neural networks do not scale well to large or complex problems that brains easily handle, and artificial neural networks as we know them may actually be not so terrific.
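The linear-separability point in the review above can be sketched in a few lines of Python. This is an illustrative toy, not code from the book: a single threshold unit trained with the classic perceptron learning rule fits the (linearly separable) AND function, but cannot fit XOR, the textbook example of a pattern no single separating line can classify.

```python
# Toy sketch (not from the book): the classic perceptron learning rule
# on 2-input binary patterns. A single threshold unit learns AND
# (linearly separable) but cannot learn XOR (not separable).

def train_perceptron(samples, epochs=20, lr=1.0):
    """Train one threshold unit with the perceptron learning rule."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # +1, 0, or -1
            w[0] += lr * err * x1       # nudge weights toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def accuracy(w, b, samples):
    correct = 0
    for (x1, x2), target in samples:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        correct += (out == target)
    return correct / len(samples)

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # separable
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # not separable

w, b = train_perceptron(AND)
print(accuracy(w, b, AND))  # 1.0: a single unit suffices

w, b = train_perceptron(XOR)
print(accuracy(w, b, XOR))  # stays below 1.0: no separating line exists
```

Adding a hidden layer (and a training rule that can reach it, such as backpropagation) is what later overcame this particular limitation, as the review notes.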

Seminal AI book

This is a seminal work in the field of Artificial Intelligence. Following an initial period of enthusiasm, the field encountered a period of frustration and disrepute. Minsky and Papert's 1969 book summed up this general feeling of frustration among researchers by demonstrating the representational limitations of perceptrons (used in neural networks). Their arguments were very influential in the field and accepted by most without further analysis. I found this book to be generally easy to read. Despite being written in 1969, it is still very timely.
Copyright © 2024 Thriftbooks.com