
ISBN: 0465051545

ISBN13: 9780465051540

Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought


Format: Hardcover

Condition: Very Good*

*Best Available: (ex-library)

$7.39
Save $22.61!
List Price $30.00

Book Overview

Readers of earlier works by Douglas Hofstadter will find this book a natural extension of his style and his ideas about creativity and analogy; in addition, psychologists, philosophers, and artificial-intelligence researchers will find in this elaborate web of ingenious ideas a deep and challenging new view of mind.

Customer Reviews

5 ratings

Wonderful but quite dry in parts

This book is, as others have commented, different from DH's other more entertaining books. It is a serious attempt to discuss the real issues and difficulties with AI research. There is a lot of quite dry material and in places it is repetitive. It provides terrific insight into the problem of imitating human thinking at a deep level, and I found it very rewarding. It was also very interesting to follow the threads of how he went about doing research, and what he thought of other AI research. His views of the various flavours of AI research were very instructive and insightful, I thought. In summary, a good book, but this is not (high-quality) brain candy like Godel Escher Bach etc.

Novel approaches to artificial intelligence

This book has received some poor reviews and been unfairly compared to Hofstadter's previous book, Goedel, Escher, Bach. While both are books about cognitive science, GEB is a book of philosophy -- it's written for the layperson and discusses the topic in relatively abstract terms. This book is no less interesting for the fact that it deals in concretes: it discusses the actual architecture, the design of the programs which simulate the intelligent processes described so well in GEB. Those with a background in computer programming will especially appreciate the novelty of Hofstadter's architecture, and will perhaps be inspired to implement their own. Those without such a background probably won't have any trouble visualizing the processes for themselves. The book is written as a collection of essays, so my recommendation is: skip around. Read whatever interests you, and think about it for a while. This book is neither a narrative nor an exhaustive reference, and you won't enjoy it if you try to read it as either.
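
For readers whose curiosity is piqued by the architecture this reviewer mentions, here is a deliberately tiny sketch (my own Python, not Hofstadter's actual code) of the codelet-and-coderack idea behind Copycat-style programs: many small tasks are posted with urgencies and one is picked stochastically at each step, rather than being driven by a single top-down controller. The class names and urgency values below are invented for illustration.

```python
# Illustrative sketch only: a stripped-down "coderack" in the spirit of the
# codelet-based architectures described in the book. Names and numbers here
# are simplifications for this example, not the book's actual code.
import random
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Codelet:
    """A tiny, independent task with an 'urgency' that biases selection."""
    action: Callable[[], None]
    urgency: float

@dataclass
class Coderack:
    codelets: List[Codelet] = field(default_factory=list)

    def post(self, action: Callable[[], None], urgency: float) -> None:
        self.codelets.append(Codelet(action, urgency))

    def run_one(self) -> None:
        """Pick a waiting codelet stochastically, weighted by urgency, and run it."""
        if not self.codelets:
            return
        weights = [c.urgency for c in self.codelets]
        chosen = random.choices(self.codelets, weights=weights, k=1)[0]
        self.codelets.remove(chosen)
        chosen.action()

# Usage: many small, competing pressures rather than one exhaustive search.
rack = Coderack()
rack.post(lambda: print("look for a successor relation"), urgency=5.0)
rack.post(lambda: print("look for a sameness group"), urgency=1.0)
for _ in range(2):
    rack.run_one()
```

The design point, under these simplifying assumptions, is that high-urgency pressures merely tend to run earlier; nothing guarantees it, which is what gives such architectures their non-deterministic, exploratory character.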

Artificial Intelligence, Redefined

Where does meaning enter the picture in artificial intelligence? How can we say that a machine possesses understanding? Where, and how, does such understanding happen? These are among the deepest and hardest questions faced by the field, which, as many skeptics claim, has not yielded much about them so far. Consider, for instance, that most current research in AI can be roughly classified into two distinct classes:

(1) Low-level perception. The best example of this type of work obviously comes from computer vision systems. These systems, given a set of input images, usually extract some important information from this input, generating, well, other images (e.g. depth images, edge contours, etc.). But this extracted information is still at a very low, meaningless level, to be used by, for instance, a theorem-proving system. To make it clear to all readers what is meant by "meaning", consider the information processing that must occur whenever an animal, given its massive sensory input, perceives danger. Going from a set of images and sounds to a feeling of danger involves extracting meaning from the original input, and this is not what is done by current low-level perception projects. It is almost as if these perceptual processes "delegate" the extraction of meaning to some later process. To get at the meaning of a situation, low-level perceptual processes are not enough; there is a clear need for further perceptual processing.

(2) GOFAI symbolic manipulation. This is the other side of the AI coin, dubbed by philosopher John Haugeland GOFAI, for "good old-fashioned artificial intelligence", where programs usually handle (syntactically) a representation that supposedly should have been formed by a perceptual process. These systems, such as theorem provers, chess players, and others, do perform some impressive feats, but they do not have a clue about the semantics of their symbol manipulation. As an example, consider the following predicate-calculus statement: (philosopher (Socrates)). We all fully understand what that means, but what about the machine that executes it? Does it have any meaning to the machine? It is obvious that the answer is no, for that is just a syntactic symbol, as meaningful to the computer as (XzE (GgGggGG)), which doesn't mean anything. But how can a system that only manipulates meaningless syntactic symbols possess any meaning in those symbols? This seems to be an intrinsic problem with GOFAI projects.

Both of these avenues of AI research seem to be based on an unspoken hypothesis of a "center of meaning" arising in the brain (maybe the mind's eye?). The low-level perceptual processes operate on information that has yet to reach such a place, and GOFAI systems in turn handle information that seems to have long since reached it. The problem is, what happens at the point of crossing the line? Nobody really knows. Maybe, then, there is no such line after all - as Hofstadter clearly argues in this book.
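
The reviewer's point about purely syntactic manipulation can be made concrete with a toy rule engine. This is a hypothetical Python illustration, not anything from the book: the predicates and rules are made up, and the engine derives new "facts" simply by matching token strings, so it is exactly as comfortable with (philosopher Socrates) as with gibberish.

```python
# Hypothetical illustration of the GOFAI point above: a rule engine that treats
# "(philosopher Socrates)" and "(XzE GgGggGG)" with the same indifference,
# because only the shapes of the tokens matter to it.
from typing import List, Set, Tuple

Fact = Tuple[str, str]   # (predicate, argument)
Rule = Tuple[str, str]   # if (antecedent x) then (consequent x)

def forward_chain(facts: Set[Fact], rules: List[Rule]) -> Set[Fact]:
    """Derive new facts purely by string matching; no semantics involved."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            for predicate, arg in list(derived):
                if predicate == antecedent and (consequent, arg) not in derived:
                    derived.add((consequent, arg))
                    changed = True
    return derived

# The engine is equally happy with meaningful and meaningless symbols:
print(forward_chain({("philosopher", "Socrates")}, [("philosopher", "mortal")]))
print(forward_chain({("XzE", "GgGggGG")}, [("XzE", "QqQ")]))
```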

A Subdued Hofstadter, But Not a Bad Hofstadter

For those who are familiar with Hofstadter's style in Godel Escher Bach, as well as Metamagical Themas, this might be a shock. The normally outrageous style of Hofstadter is quite subdued in this tome, as you are taken on a journey through his research projects. I enjoyed reading about his goals and those of his FARG collaborators. Even more interesting, to me at least, were his thoughts on the state of Artificial Intelligence... err, I mean Cognitive Science, and what he believes are some fundamental flaws in the philosophies of many of his contemporaries. If you're up to reading research-type papers on some very interesting projects, this book is very worthwhile.

"Analogies" - Bucks the status quo in the field of AI !

For a number of years now, I've followed the works of Douglas Hofstadter. I was instantly hooked when I first read his column Metamagical Themas, which ran in Scientific American from 1981 through 1983. In that column, he tackled all manner of thought-provoking subjects. In the intervening years, he has released some pretty meme-rich tomes, none for the faint of heart. From the far-out thought experiments of The Mind's I to the Pulitzer Prize-winning Godel, Escher, Bach: An Eternal Golden Braid, to his latest (reviewed here), Mr. Hofstadter always keeps the reader on his or her mental toes.

Many researchers in the field of Artificial Intelligence take the approach of attempting to mimic the behavior of people with computer programs. On the surface, this might seem a logical direction to take, and so AI researchers have a tendency to dream up batteries of tests that aim to characterize some area of human behavior, then sum up all the results and come up with the range of responses that fits cozily into their bell-shaped curves. Armed with what they've assured themselves is the normal human response to all their scenarios, they go off and attempt to write computer programs that react the same way as John or Jane Doe did. Once they've gotten a program that generally responds like 'most of the human subjects' did, they usually beef it up by programming in more and more details about the domain of the scenario at hand. A good example of this line of thought is Deep Blue, IBM's massively parallel chess-playing supercomputer. What Douglas Hofstadter's latest book points out is that this sort of thinking about artificial intelligence is the brute-force approach. What you end up with is a computer that knows a *lot* about a particular domain (i.e. chess), but has no other redeeming features whatsoever. Deep Blue could probably whip 99.9% of the human population at chess, but it can't even begin to recognize the elegance of a particular strategy (such as the Sicilian Defense) because it has no ability to make analogies to other domains.

The ongoing thread of Hofstadter's work has always been quite clear. He's interested in understanding human thought, not mimicking it. In his latest work, Analogies, he and his FARGonauts (students at his Fluid Analogies Research Group - FARG) introduce us to several of their long-term projects that uncover some of the 'fundamental mechanisms of thought'. His usual modus operandi is to examine the problem space of extremely simple microdomains - problem sets having very few parameters, but that scale up well into higher domains through the analogies they evoke. For instance, he describes a very simple game called "Tabletop", in which two players face each other across a table in a cafe. On both sides of the table are arranged various objects of the Tabletop domain - knives, spoons, cups, plates, salt and pepper shakers, etc. The game begins when one player touches an object on his or her side of the table; the other player must then respond by touching the object on his or her own side that best plays "the same role", which, when the two sides differ, demands exactly the kind of fluid analogy-making the book is about.
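
To make the Tabletop setup more tangible, here is a toy Python rendering of the "touch the corresponding object" idea. The table contents, the role groupings, and the fallback rule are all invented for this sketch and are far cruder than the fluid, pressure-driven slippages the actual program models.

```python
# Toy rendering of the Tabletop game described above, not the actual program:
# given the object one player touches, pick the "same" object on the other
# side, falling back to an analogous role when there is no literal counterpart.
MY_SIDE = {"cup", "spoon", "salt shaker"}
YOUR_SIDE = {"glass", "fork", "salt shaker"}

# Crude, hand-made role groupings standing in for fluid concept slippage.
ROLES = {
    "drinking vessel": {"cup", "glass", "mug"},
    "utensil": {"spoon", "fork", "knife"},
    "condiment": {"salt shaker", "pepper shaker"},
}

def respond(touched: str, other_side: set) -> str:
    """Return the most 'analogous' object: literal match first, then shared role."""
    if touched in other_side:
        return touched                        # do literally the same thing
    for members in ROLES.values():
        if touched in members:
            candidates = members & other_side
            if candidates:
                return sorted(candidates)[0]  # deterministic pick for the demo
    return "(no good analogue -- touch anything)"

print(respond("cup", YOUR_SIDE))          # -> glass
print(respond("salt shaker", YOUR_SIDE))  # -> salt shaker
```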