COMPUTATION AND NEURAL SYSTEMS SERIES
SERIES EDITOR
Christof Koch
California Institute of Technology
EDITORIAL ADVISORY BOARD MEMBERS
Dana Anderson
University of Colorado, Boulder
Michael Arbib
University of Southern California
Dana Ballard
University of Rochester
James Bower
California Institute of Technology
Gerard Dreyfus
École Supérieure de Physique et de Chimie Industrielles de la Ville de Paris
Rolf Eckmiller
University of Düsseldorf
Kunihiko Fukushima
Osaka University
Walter Heiligenberg
Scripps Institution of Oceanography, La Jolla
Shaul Hochstein
Hebrew University, Jerusalem
Alan Lapedes
Los Alamos National Laboratory
Carver Mead
California Institute of Technology
Guy Orban
Catholic University of Leuven
Haim Sompolinsky
Hebrew University, Jerusalem
John Wyatt, Jr.
Massachusetts Institute of Technology
The series editor, Dr. Christof Koch, is Assistant Professor of Computation and Neural
Systems at the California Institute of Technology. Dr. Koch works at the biophysical
level, investigating information processing in single neurons and in networks such as
the visual cortex, and also studies and implements simple resistive networks for
computing motion, stereo, and color in biological and artificial systems.
Neural Networks
Algorithms, Applications,
and Programming Techniques
James A. Freeman
David M. Skapura
Loral Space Information Systems
and
Adjunct Faculty, School of Natural and Applied Sciences
University of Houston at Clear Lake
Addison-Wesley Publishing Company
Reading, Massachusetts • Menlo Park, California • New York
Don Mills, Ontario • Wokingham, England • Amsterdam • Bonn
Sydney • Singapore • Tokyo • Madrid • San Juan • Milan • Paris
Library of Congress Cataloging-in-Publication Data
Freeman, James A.
Neural networks : algorithms, applications, and programming techniques
/ James A. Freeman and David M. Skapura.
p. cm.
Includes bibliographical references and index.
ISBN 0-201-51376-5
1. Neural networks (Computer science) 2. Algorithms.
I. Skapura, David M. II. Title.
QA76.87.F74 1991
006.3-dc20
90-23758
CIP
Many of the designations used by manufacturers and sellers to distinguish their products are claimed
as trademarks. Where those designations appear in this book, and Addison-Wesley was aware of a
trademark claim, the designations have been printed in initial caps or all caps.
The programs and applications presented in this book have been included for their instructional
value. They have been tested with care, but are not guaranteed for any particular purpose. The
publisher does not offer any warranties or representations, nor does it accept any liabilities with
respect to the programs or applications.
Copyright ©1991 by Addison-Wesley Publishing Company, Inc.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system,
or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or
otherwise, without the prior written permission of the publisher. Printed in the United States of
America.
1 2 3 4 5 6 7 8 9 10-MA-9594939291
Preface
The appearance of digital computers and the development of modern theories
of learning and neural processing both occurred at about the same time, during
the late 1940s. Since that time, the digital computer has been used as a tool
to model individual neurons as well as clusters of neurons, which are called
neural networks. A large body of neurophysiological research has accumulated
since then. For a good review of this research, see Neural and Brain Modeling
by Ronald J. MacGregor [21]. The study of artificial neural systems (ANS) on
computers remains an active field of biomedical research.
Our interest in this text is not primarily neurological research. Rather, we
wish to borrow concepts and ideas from the neuroscience field and to apply them
to the solution of problems in other areas of science and engineering. The ANS
models that are developed here may or may not have neurological relevance.
Therefore, we have broadened the scope of the definition of ANS to include
models that have been inspired by our current understanding of the brain, but
that do not necessarily conform strictly to that understanding.
The first examples of these new systems appeared in the late 1950s. The
most common historical reference is to the work done by Frank Rosenblatt on
a device called the perceptron. There are other examples, however, such as the
development of the Adaline by Professor Bernard Widrow.
Unfortunately, ANS technology has not always enjoyed the status in the
fields of engineering or computer science that it has gained in the neuroscience
community. Early pessimism concerning the limited capability of the perceptron
effectively curtailed most research that might have paralleled the neurological
research into ANS. From 1969 until the early 1980s, the field languished. The
appearance, in 1969, of the book, Perceptrons, by Marvin Minsky and Seymour
Papert [26], is often credited with causing the demise of this technology.
Whether this causal connection actually holds continues to be a subject for de-
bate. Still, during those years, isolated pockets of research continued. Many of
the network architectures discussed in this book were developed by researchers
who remained active through the lean years. We owe the modern renaissance of
neural-network technology to the successful efforts of those persistent workers.
Today, we are witnessing substantial growth in funding for neural-network
research and development. Conferences dedicated to neural networks and a
new professional society have appeared, and many new educational programs
at colleges and universities are beginning to train students in neural-network
technology.
In 1986, another book appeared that has had a significant positive effect
on the field.
Parallel Distributed Processing (PDP), Vols. I and II, by David
Rumelhart and James McClelland [23], and the accompanying handbook [22]
are the place most often recommended to begin a study of neural networks.
Although biased toward physiological and cognitive-psychology issues, it is
highly readable and contains a large amount of basic background material.
PDP is certainly not the only book in the field, although many others tend to
be compilations of individual papers from professional journals and conferences.
That statement is not a criticism of these texts. Researchers in the field publish
in a wide variety of journals, making accessibility a problem. Collecting a series
of related papers in a single volume can overcome that problem. Nevertheless,
there is a continuing need for books that survey the field and are more suitable
to be used as textbooks. In this book, we attempt to address that need.
The material from which this book was written was originally developed
for a series of short courses and seminars for practicing engineers. For many
of our students, the courses provided a first exposure to the technology. Some
were computer-science majors with specialties in artificial intelligence, but many
came from a variety of engineering backgrounds. Some were recent graduates;
others held Ph.D.s. Since it was impossible to prepare separate courses tailored to
individual backgrounds, we were faced with the challenge of designing material
that would meet the needs of the entire spectrum of our student population. We
retain that ambition for the material presented in this book.
This text contains a survey of neural-network architectures that we believe
represents a core of knowledge that all practitioners should have. We have
attempted, in this text, to supply readers with solid background information,
rather than to present the latest research results; the latter task is left to the
proceedings and compendia, as described later. Our choice of topics was based
on this philosophy.
It is significant that we refer to the readers of this book as practitioners.
We expect that most of the people who use this book will be using neural
networks to solve real problems. For that reason, we have included material on
the application of neural networks to engineering problems. Moreover, we have
included sections that describe suitable methodologies for simulating neural-
network architectures on traditional digital computing systems. We have done
so because we believe that the bulk of ANS research and applications will
be developed on traditional computers, even though analog VLSI and optical
implementations will play key roles in the future.
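By way of orientation, the following fragment is a minimal sketch, not an excerpt from the book's later chapters, of what such a simulation looks like on a conventional computer: a single artificial unit forms a weighted sum of its inputs and passes that sum through a threshold, in the spirit of Rosenblatt's perceptron. The input pattern, weights, and threshold are illustrative values only.

#include <stdio.h>

#define N_INPUTS 3

/* Step activation: the unit fires (outputs 1) when its net input
   exceeds the threshold, in the spirit of Rosenblatt's perceptron. */
static int step(double net, double threshold)
{
    return net > threshold ? 1 : 0;
}

int main(void)
{
    /* Illustrative input pattern and weight vector; values are arbitrary. */
    double inputs[N_INPUTS]  = {1.0, 0.0, 1.0};
    double weights[N_INPUTS] = {0.5, -0.3, 0.8};
    double net = 0.0;
    int i;

    /* Net input: the inner product of the input and weight vectors. */
    for (i = 0; i < N_INPUTS; i++)
        net += inputs[i] * weights[i];

    printf("net input = %.2f, output = %d\n", net, step(net, 1.0));
    return 0;
}

Simulating an entire network amounts to repeating this inner product for every unit, layer by layer; organizing that repetition efficiently on a traditional digital computer is the concern of the simulation sections mentioned above.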
The book is suitable both for self-study and as a classroom text. The level
is appropriate for an advanced undergraduate or beginning graduate course in
neural networks. The material should be accessible to students and profession-
als in a variety of technical disciplines. The mathematical prerequisites are the