e99 Online Shopping Mall

Geometry.Net - the online learning center
Home - Math Discover - Fuzzy Math (Books)

41-58 of 58

$26.90
41. Design of Survivable Networks
$179.00
42. Artificial Neural Nets and Genetic Algorithms
$74.80
43. The Nature of Statistical Learning Theory
44. Mathematical Aspects of Spin Glasses and Neural Networks
$45.00
45. Intelligence Through Simulated Evolution
$33.00
46. Artificial Neural Networks for Civil Engineers
$107.00
47. Neural Networks in Design and Manufacturing
$75.99
48. Feedforward Neural Network Methodology
$36.00
49. Control of Uncertain Sampled-Data Systems
$118.31
50. Biological Neural Networks: the Hierarchical Concept of Brain Function
$179.00
51. Evaluation of Uncertainties and Risks in Geology
$72.95
52. Neural Networks and Analog Computation: Beyond the Turing Limit
$64.95
53. Optimization Techniques, Volume
54. Neural Networks
$85.86
55. Algorithmic Decision Theory: First
$64.94
56. Dependability of Engineering Systems:
$67.25
57. Scalable Uncertainty Management:
$98.00
58. Theory of Randomized Search Heuristics:

41. Design of Survivable Networks (Lecture Notes in Mathematics)
by Mechthild Stoer
Perfect Paperback: 206 Pages (1993-01-26)
list price: US$46.00 -- used & new: US$26.90
Asin: 3540562710
Editorial Review

Product Description
The problem of designing a cost-efficient network that survives the failure of one or more nodes or edges of the network is critical to modern telecommunications engineering. The method developed in this book is designed to solve such problems to optimality. In particular, a cutting plane approach is described, based on polyhedral combinatorics, that is able to solve real-world problems of this type in short computation time. These results are of interest for practitioners in the area of communication network design. The book is addressed especially to the combinatorial optimization community, but also to those who want to learn polyhedral methods. In addition, interesting new research problems are formulated. ... Read more


42. Artificial Neural Nets and Genetic Algorithms: Proceedings of the International Conference in Innsbruck, Austria, 1993
 Paperback: 737 Pages (1993-06-18)
list price: US$179.00 -- used & new: US$179.00
Asin: 3211824596
Editorial Review

Product Description
Artificial neural networks and genetic algorithms are both areas of research which have their origins in mathematical models constructed in order to gain understanding of important natural processes. By focusing on the process models rather than the processes themselves, significant new computational techniques have evolved which have found application in a large number of diverse fields. This diversity is reflected in the topics which are the subjects of contributions to this volume. There are contributions reporting theoretical developments in the design of neural networks, and in the management of their learning. A number of contributions cover applications to speech recognition tasks, control of industrial processes, credit scoring, and so on. Regarding genetic algorithms, several methodological papers consider how genetic algorithms can be improved using an experimental approach, as well as by hybridizing with other useful techniques such as tabu search. The closely related area of classifier systems also receives a significant amount of coverage, aiming at better ways for their implementation. Further, while there are many contributions which explore ways in which genetic algorithms can be applied to real problems, nearly all involve some understanding of the context in order to apply the genetic algorithm paradigm more successfully. That this can indeed be done is evidenced by the range of applications covered in this volume. ... Read more


43. The Nature of Statistical Learning Theory
by Vladimir N. Vapnik
 Hardcover: 188 Pages (1998-12-14)
list price: US$64.95 -- used & new: US$74.80
Asin: 0387945598
Average Customer Review: 4.0 out of 5 stars
Editorial Review

Product Description
The aim of this book is to discuss the fundamental ideas which lie behind the statistical theory of learning and generalization. It considers learning from the general point of view of function estimation based on empirical data. Omitting proofs and technical details, the author concentrates on discussing the main results of learning theory and their connections to fundamental problems in statistics. These include:
- the general setting of learning problems and the general model of minimizing the risk functional from empirical data
- a comprehensive analysis of the empirical risk minimization principle, showing how it allows for the construction of necessary and sufficient conditions for consistency
- non-asymptotic bounds for the risk achieved using the empirical risk minimization principle
- principles for controlling the generalization ability of learning machines using small sample sizes
- a new type of universal learning machine that controls the generalization ability. ... Read more
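As a toy illustration of the empirical risk minimization setting the description refers to (a hypothetical sketch, not Vapnik's formalism; the data, thresholds, and hypothesis class below are invented for illustration): given a finite sample and a class of candidate functions, ERM selects the function with the smallest average loss on the sample.

```python
# Toy empirical risk minimization over a finite hypothesis class of
# threshold classifiers h_t(x) = 1 if x >= t else 0 (illustrative only).

def empirical_risk(h, data):
    """Average 0-1 loss of hypothesis h on the sample."""
    return sum(h(x) != y for x, y in data) / len(data)

def erm(thresholds, data):
    """Return the threshold whose classifier minimizes empirical risk."""
    return min(thresholds,
               key=lambda t: empirical_risk(lambda x: int(x >= t), data))

data = [(0.1, 0), (0.4, 0), (0.6, 1), (0.9, 1)]
t = erm([0.0, 0.25, 0.5, 0.75, 1.0], data)
assert empirical_risk(lambda x: int(x >= t), data) == 0.0
```

The consistency question the book analyzes is whether, as the sample grows, the risk of the ERM-chosen hypothesis converges to the best achievable risk in the class.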

Customer Reviews (5)

5-0 out of 5 stars Remarkably readable tour of one path into machine learning
This book is meant to be a popularization, of sorts, of the material covered in the considerably more formal and detailed treatment, "Statistical Learning Theory." Some of the other reviewers have commented on how Vapnik's subjective perspective is not as evenhanded as they would like. However, I would not have it any other way. I really enjoyed the fact that he has an organic understanding of the field and he expresses his opinions about it in a relatively unvarnished way; it is undeniable that he played a central role in it. Most readers of this kind of thing should be mature enough to deal with the subjectivity that an author must have in talking about the relevance of their own life's work. He is a bit dismissive of work that he believes is either competitive or derivative/overlapping with his own (as other reviewers pointed out, this includes nearly all of the American work in the 1980s and 90s).

The benefit of such subjectivity is a framing of the problems of machine learning in the context of the grand scheme of mathematics/statistics. The book has many insights that would usually be reserved only for lectures. Since it is subjective, it is not PC, and he gives his (rather valuable) opinions and insights. I really appreciated that. The connections to philosophical work on induction (Kant, Popper) and the formalization of this into a study of statistical induction made for a brilliant section, though it was clear that the argument was more an interpretation for the risk formulation than an encoding of the philosophical texts. You either find that sort of thing interesting or you don't.

In summary, a unique portal into understanding Vapnik's extremely insightful point of view on the subject. He has obviously thought very deeply about the topics he's writing about, and it comes through.

3-0 out of 5 stars New to Field of Learning Theory
I am relatively new to statistical learning theory, though with a solid background in supporting theories and a Master's in Engineering. I found the text readable. I appreciate the historical perspective and the development of concepts by the author. I was generally able to grasp Vapnik's theories and explanations, though often after rereading passages many times.

Simple examples would significantly aid the readability and understandability of the text - akin to the way we teach our children. We don't describe all the attributes of a rabbit, we point to a picture of a rabbit and say "bunny". After two or three examples of this my children know the abstract concept of a rabbit (without me having to describe a small, four legged creature with long ears, etc. and then answering the inevitable question of "What's four legged creature mean daddy?"). Particularly with a text about learning theory, one would think it would be full of such examples - at least from a pedagogical point of view.

Initially, I didn't mind Vapnik's editorializing, but after a while I found it annoying - I'm sure he didn't single-handedly invent the entire field of statistical learning theory, but he sure doesn't miss any opportunities to tell the reader that he believes he has.

3-0 out of 5 stars worth reading
A good, albeit highly idiosyncratic, guide to statistical learning. The highly personal account of the theory is both the strong point and the drawback of the treatise. On one side, Vapnik never loses sight of the big picture, and gives illuminating insights and formulations of the "basic problems" (as he calls them) that are not found in any other book. The lack of proofs and the slightly erratic organization of the topic make for a brisk, enjoyable read. On the minus side, the choice of topics is very biased. In this respect, the book is a self-congratulatory tribute by the author to himself: it appears that the foundations of statistical learning were single-handedly laid by him and his collaborators. This is not really the case. Consistency of the empirical risk measure is rather trivial from the viewpoint of a person trained in asymptotic statistics, and interval estimators for finite data sets are the subject of much advanced statistical literature. Finally, SVMs and neural nets are just a part of the story, and probably not the most interesting one.
In a nutshell, what Vapnik shows, he shows very well, and he is able to provide the "why" of things as no one else. What he doesn't show... you'll have to find somewhere else (the recent book by Friedman, Hastie & Tibs is an excellent starting point).
A last remark. The book is rich in grammatical errors and typos. They could have been corrected in the second edition, but do not detract from the book's readability.

5-0 out of 5 stars A very nice book to get ideas on support vector machines
This is a very readable book by an authority on this subject. The book starts with the statistical learning theory pioneered by the author and co-workers' work, and gradually leads to the path of discovery of support vector machines. An excellent and distinctive property of support vector machines is that they are robust to small data perturbation and have good generalization ability, with function complexity being controlled by VC dimension. The treatment of nonlinear kernel classification and regression is given for the first time in the first edition. The 2nd edition includes significant updates, including a separate chapter on support vector regression as well as a section on logistic regression using the support vector approach. Most computations involved in this book can be implemented using a quadratic programming package. The connections of support vector machines to traditional statistical modeling, such as kernel density and regression and model selection, are also discussed. Thus, this book will be an excellent starting point for learning support vector machines.

5-0 out of 5 stars A research field described by the man who invented it
Vapnik and collaborators have developed the field of statistical learning theory underlying recent advances in machine learning and artificial intelligence (e.g. support vector machines). This book almost accomplishes the formidable task of comprehensibly describing the essential ideas of learning theory to non-statisticians. It contains ample theorems but almost no proofs. ... Read more


44. Mathematical Aspects of Spin Glasses and Neural Networks (Progress in Probability)
by Anton Bovier, Pierre Picco
 Hardcover: 400 Pages (1997-12)

Isbn: 3764338636
Editorial Review

Product Description
Aimed at graduates and potential researchers, this is a comprehensive introduction to the mathematical aspects of spin glasses and neural networks. It should be useful to mathematicians in probability theory and theoretical physics, and to engineers working in theoretical computer science. ... Read more


45. Intelligence Through Simulated Evolution: Forty Years of Evolutionary Programming (Wiley Series on Intelligent Systems)
by Lawrence J. Fogel
Hardcover: 162 Pages (1999-08-02)
list price: US$105.00 -- used & new: US$45.00
Asin: 047133250X
Editorial Review

Product Description
A unique, one-stop reference to the history, technology, and application of evolutionary programming

Evolutionary programming has come a long way since Lawrence Fogel first proposed in 1961 that intelligence could be modeled on the natural process of evolution. Efforts to apply this innovative approach to artificial intelligence have also evolved over the years, and the advent of fast desktop computers capable of solving complex computational problems has spawned an explosion of interest in the field.

Offering the unique perspective of one of the inventors of evolutionary programming, this remarkable work traces forty years of developments in the field. Dr. Fogel consolidates a wealth of information and hard-to-find figures from across the literature, providing comprehensive coverage of the evolutionary programming approach to simulated evolution. This includes both an updated, condensed version of his bestselling 1966 work, Artificial Intelligence Through Simulated Evolution (with Owens and Walsh), and a thorough discussion of the history, technology, and methods of machine learning from 1970 to the present.

This important resource features clear, up-to-date explanations of how the simulation of evolutionary processes allows machines to learn to solve new problems in new ways. And it helps readers make the leap to generating intelligent systems, extending the discussion to neural networks, fuzzy logic, and genetic algorithms development. Engineers and computer scientists in all areas of machine learning will gain invaluable insight into existing and emerging applications and obtain ample ideas to draw upon in future research. ... Read more


46. Artificial Neural Networks for Civil Engineers: Fundamentals and Applications
Paperback: 216 Pages (1997-06)
list price: US$33.00 -- used & new: US$33.00
Asin: 0784402256

47. Neural Networks in Design and Manufacturing
by Jun Wang
 Hardcover: 292 Pages (1993-12)
list price: US$107.00 -- used & new: US$107.00
Asin: 981021281X
Editorial Review

Product Description
Over the past few years, there has been a surge of research on artificial neural networks. Although the thrust originally came from computer scientists and electrical engineers, neural network research has attracted researchers in the fields of operations research, operations management and industrial engineering. Despite the huge volume of recent publications devoted to neural network research, there is no single monograph addressing the potential roles of artificial neural networks in design and manufacturing. The focus of this book is on the applications of neural network concepts and techniques to design and manufacturing. This book reviews the state of the art of the research activities, highlights the recent advances in research and development, and discusses the potential directions and future trends along this stream of research. The potential readers of this book include, but are not limited to, beginners, professionals and practitioners in industries who are applying neural networks to design and manufacturing. The topics include group technology, assembly line balancing, material handling, quality control and assurance, cutting parameter optimization, planning and scheduling, facility design and layout, process monitoring and control, and others. ... Read more


48. Feedforward Neural Network Methodology (Springer Series in Statistics)
by Terrence L. Fine
Hardcover: 340 Pages (1999-06-11)
list price: US$109.00 -- used & new: US$75.99
Asin: 0387987452
Average Customer Review: 5.0 out of 5 stars
Editorial Review

Product Description
This monograph provides a thorough and coherent introduction to the mathematical properties of feedforward neural networks and to the computationally intensive methodology that has enabled their highly successful application to complex problems of pattern classification, forecasting, regression, and nonlinear systems modeling. The reader is provided with the information needed to make practical use of the powerful modeling and design tool of feedforward neural networks, as well as presented with the background needed to make contributions to several research frontiers. This work is therefore of interest to those in electrical engineering, operations research, computer science, and statistics who would like to use nonlinear modeling of stochastic phenomena to treat problems of pattern classification, forecasting, signal processing, machine intelligence, and nonlinear regression. T.L. Fine is Professor of Electrical Engineering at Cornell University. ... Read more

Customer Reviews (2)

5-0 out of 5 stars Deep knowledge about Neural Networks inner workings
If you're serious about understanding how neural networks (feedforward NNs) work, then this book is a must-have. It is a one-of-a-kind resource in terms of the mathematical breadth and depth it brings to the topic.

It goes deeply into the mathematical reasoning and theorems that make NNs learn, and provides some sample Matlab code that can add to your understanding of implementation.

This book is not for everyone: you need to like and understand calculus to some degree to be able to follow it. It is a great acquisition if your interest is to gain a graduate-course-level understanding of the subject.

5-0 out of 5 stars Great for practical applications
This book provides a nice balance of math, examples, and MATLAB code. It was of great help for me to understand and code up my own forms of ANNs. The example code provided is a really nice feature. It also puts a great deal of emphasis on relating statistics with ANNs. ... Read more


49. Control of Uncertain Sampled-Data Systems (Systems & Control: Foundations & Applications)
by Geir E. Dullerud
Hardcover: 177 Pages (1995-11-29)
list price: US$89.95 -- used & new: US$36.00
Asin: 0817638512
Editorial Review

Product Description
Hybrid systems are formed when continuous and discrete-time systems are interconnected. The author's goal in this book is to provide a detailed treatment of uncertainty analysis for sampled-data hybrid systems in the context of robust control theory. The book has a general scope in that it offers a widely applicable and unifying viewpoint that considers a large class of structured uncertainty problems arising in control, whether the system be hybrid, discrete-time or continuous-time.

In the body of the text, operator theoretic tools and techniques are developed to address the central design issues of performance and stabilization in the presence of structured uncertainty classes. The methods are applied to exact analysis of hybrid sampled-data systems in the H-infinity or Hilbert space setting, with the focus being μ-theory and its generalizations to time-varying uncertainty structures. The mathematical machinery and framework presented provide a unified approach to studying performance and uncertainty, and are applicable to both standard and sampled-data systems.

The material of the book is of both theoretical and engineering interest: from a theory perspective the reader can expect to gain intuitive and powerful techniques for treating robust performance problems; practitioners may obtain methods that can be directly implemented in engineering applications. ... Read more


50. Biological Neural Networks: the Hierarchical Concept of Brain Function
by Konstantin V. Baev
Hardcover: 273 Pages (1998-04-30)
list price: US$149.00 -- used & new: US$118.31
Asin: 0817638598
Average Customer Review: 5.0 out of 5 stars
Editorial Review

Product Description
"Biological Neural Networks" presents a novel conceptual framework for neurobiology achieved by the application of control theory. This new paradigm provides unifying principles for understanding the functional construction of the nervous system. Konstantin Baev argues forcefully that all hierarchical levels of the nervous system are built according to the same functional principles, which are shown to underlie the highest forms of brain function. Each network hierarchy is structurally and functionally organized in such a way that a lower control system in the nervous system becomes the controlled object for a higher one, and each level of control possesses a model of behavior of its controlled object.

The book brings together for the first time the fields of neural networks (with its computational capabilities), control theory (with its hierarchical machinery), and neurobiology (with its plethora of enigmatic physiological functions, such as inborn and acquired automatic behavior). The function of the cerebellum, the limbic system, and the cortico-basal ganglia-thalamocortical loops are analyzed within this new hierarchical framework. Clinical applications include an original explanation of Parkinson's disease and suggested mechanisms of alleviating its symptoms by conducting functional neurosurgical procedures.

Baev writes with an interdisciplinary readership in mind. Neuroscientists, computer specialists and mathematicians, physicists and clinicians devoted to deciphering the way the brain works will all find it fascinating and stimulating reading. ... Read more

Customer Reviews (3)

5-0 out of 5 stars Regarding Science-Ejected Vitalism, 1998:
Vitalism is a profoundly science-ejected concept, though many CAM or 'natural health' cabals falsely claim that vitalism survives scientific scrutiny.

One of my favorite passages from this book:

"the achievements of molecular biology in the twentieth century proved conclusively that it is not necessary to propose that life processes arise from some nonmaterial vital principle and cannot be explained entirely as physical and chemical phenomena. [E.g.] biological neural networks are created by nature, and the laws of nature should be applicable to them [p.003]."

-r.c.

5-0 out of 5 stars very captivating - a dazzling introduction
Karl A. Greene in the foreword asserted that after reading this book, one will never look at neurobiology and the human brain quite the same again, and I fully concur. Baev introduces a modular framework that fuses neurobiology with control theory and opens the portals for artificial intelligence to enter. Unleashing the powers of hierarchical modeling, his monograph presents a thoroughly conceptualized and truly captivating approach to understanding the functioning of the human brain. Two thumbs up; I had to read this book five times to fully understand it, but I enjoyed it every single time. ... Read more


51. Evaluation of Uncertainties and Risks in Geology: New Mathematical Approaches for their Handling
by György Bardossy, János Fodor
Paperback: 222 Pages (2010-11-02)
list price: US$179.00 -- used & new: US$179.00
Asin: 3642058337
Editorial Review

Product Description

High levels of uncertainty are a trademark of geological investigations, such as the search for oil, diamonds, and uranium. Business ventures related to geology, such as mineral exploration and mining, are therefore naturally associated with higher risks than more traditional entrepreneurial ventures in industry and the economy. There are also a number of dangerous natural hazards, e.g. earthquakes, volcanic activity, and inundations, that are the direct result of geological processes. It is of paramount interest to study them all, to describe them, to understand their origin and - if possible - to predict them. While uncertainties, geological risks and natural hazards are often mentioned in geological textbooks, conference papers, and articles, no comprehensive and systematic evaluation has so far been attempted. This book, written at a level appropriately sophisticated to deal with the complexity of these problems, presents a detailed evaluation of the entire problem, discussing it from both the geological and the mathematical aspects.

... Read more

52. Neural Networks and Analog Computation: Beyond the Turing Limit (Progress in Theoretical Computer Science)
by Hava T. Siegelmann
Hardcover: 182 Pages (1998-12-01)
list price: US$99.00 -- used & new: US$72.95
Asin: 0817639497
Average Customer Review: 3.0 out of 5 stars
Editorial Review

Product Description

The theoretical foundations of Neural Networks and Analog Computation conceptualize neural networks as a particular type of computer consisting of multiple assemblies of basic processors interconnected in an intricate structure. Examining these networks under various resource constraints reveals a continuum of computational devices, several of which coincide with well-known classical models. On a mathematical level, the treatment of neural computations enriches the theory of computation, but also explicates the computational complexity associated with biological networks, adaptive engineering tools, and related models from the fields of control theory and nonlinear dynamics. The material in this book will be of interest to researchers in a variety of engineering and applied sciences disciplines. In addition, the work may provide the basis for a graduate-level seminar in neural networks for computer science students.

... Read more

Customer Reviews (5)

4-0 out of 5 stars Discussion of the consequences, not the original proof
Skeptics wanting to see the original proof, and how such "machines" can exist as natural phenomena within the constraints of physics, should refer to the author's peer reviewed articles

H.T. Siegelmann, "Computation Beyond the Turing Limit," Science, 238(28), April 1995: 632-637

and

H.T. Siegelmann, "Analog Computational Power" Science 19 January 1996 271: 373

This book discusses the consequences, and the limitations of analog computation using neural networks.

4-0 out of 5 stars Elegant theoretical apparatus
This book provides a systematic overview of a beautiful theoretical apparatus that the author and collaborators have developed for describing the computational power of neural networks. It addresses neural networks from the standpoint of computational complexity theory, not machine learning.

A central issue that arises is what values the neural couplings can take on. The book outlines the consequences of various choices. Rational-valued neural networks turn out to be Turing machines, a contribution of general significance. The book shows (and perhaps unduly emphasizes) that irrational-valued couplings can yield Superturing computation, a result which has been controversial.

If irrational numbers can arise in a computational setting, then the work outlined here is clearly a major landmark that deserves the careful, systematic exposition the book provides. On the other hand, maybe irrational numbers are just not relevant to actual computational devices. (They certainly aren't yet.) If so, the book is still a worthwhile theoretical exercise leading to an elegant set of results. Even if one leans toward the latter option - and I would say that this is probably the majority view - I don't think any of us really _know_ where the irrational numbers stand vis-a-vis our computational universe.

Even if you intuitively see that an infinitely rich source of information, which is what an irrational number provides, should yield Super-Turing computation, the book is still valuable. (If you don't have this intuition, think about it more!) There is a lot to be gleaned from the non-obvious (at least to me) details of how that intuition works itself out.

The book also has some technical flaws. The author periodically states results without really explaining them fully, or even at all. This leaves a good deal of work to the reader; I would expect to spend a few hours per page here and there, though usually it moves more quickly. Another major issue is the challenging notation, which is often more difficult than it needs to be. The book's introduction to advice Turing machines is also insufficient; you'll need to do a bit of background reading if you don't know much about them.

1-0 out of 5 stars Lots of notation, little content
This book certainly claims to give much more than it actually provides. Trying to read it, you'll have to swallow a formalism that unfortunately does not pay off. There is absolutely no revolutionary idea here, just well-known facts and the pretension to do better than a Turing machine, based on an assumption (namely, working with arbitrary precision) whose sole adoption suffices to do better than any Turing machine; you don't need a whole book to say this.

1-0 out of 5 stars Cogently argued but fatally flawed
Some of this book is an interesting discussion of the boundaries of computability. However, the book's central claim, that you can exceed the Turing limit, requires the storing of infinitely precise variables in a physical device. This is a physical impossibility which no amount of gratuitous logical notation will make go away. Even if you put aside the difficulties of measuring a value to infinite precision, quantum indeterminacy and discontinuity will not allow any physical object to store or encode an infinitely precise value in any fashion. Once this premise is seen to be false, most of the other interesting claims in the book, and all the hypercomputational ones, immediately collapse.

5-0 out of 5 stars Hypercomputation in the limits of classical physical reality
A computer is an artifact. Through specific control mechanisms of electric currents it was possible to domesticate natural phenomena and put them at man's service, giving rise to the levels of automation that characterize the world at the turn of the millennium. But a computer is an analog artifact. Paul Cull, from Oregon State University, states this computational anecdote in the following terms: «That analog devices behave digitally is the basis for a large part of electronics engineering and allows for the construction of electronic computers. It is part of the engineering folklore that when the gain is high enough any circuit from a large class will eventually settle into one of two states, which can be used to represent booleans 0 and 1. As far as we can tell, this theorem and its proof has never been published, but it probably appears in a now unobtainable MIT technical report of the 1950s.» Recently much work has been done to show that digital computers are a particular class of analog computers that exhibit greater computational power. In fact, digital computers are extreme (weak) analog computers. A book was needed to introduce these ideas to the graduate student of theoretical computer science and to the general researcher in the new field of non-standard models of computation. Hava Siegelmann's book partially fills this gap in the computational literature.

Over the last decade, researchers have speculated that although the Turing model is indeed able to simulate a large class of computations, it does not necessarily provide a complete picture of the computations possible in nature. As pointed out by Hava Siegelmann, the most famous proposals of new models were made by Richard Feynman and Roger Penrose. Feynman suggested making use of the non-locality of quantum physics. Penrose, who was motivated by the model of the human brain, argued that the Turing model of computing is not strong enough to model biological intelligence. In response, several novel models of computation have been put forth, among them the quantum Turing machine and the DNA computer. These models compute faster than Turing machines and thus are richer under time constraints. However, they cannot compute non-recursive functions, and in this sense are not inherently more powerful than the classical model. The analog recurrent neural network model of Hava Siegelmann computes more than the Turing machine, not only under time constraints, but also in general. In this sense it can be referred to as a hypercomputation model.

The use of analog recurrent neural networks for computability analysis is due to Hava Siegelmann and Eduardo Sontag. In her book, Hava Siegelmann uses them to establish lower bounds on their computational power. These systems satisfy the classical constraints of computation theory, namely: (a) the input is discrete (binary) and finite, (b) the output is discrete (binary) and finite, and (c) the system itself is finite (the control is finite). The infiniteness may originate from two different sources: the system may be influenced by a real value that directly affects the computation, which can be a physical constant, the probability of a biased binary random coin, or any other process; or the infiniteness may come from the operations of an adaptive process interleaved with the computation process, as is the case in our brains. Neurons may hold values within [0,1] with unbounded precision. To work with such analog systems, binary input is encoded into a rational number between 0 and 1, and the rational output is decoded into an output binary sequence. The technique used in this book consists of an encoding of binary words into the Cantor set of base 4. Within this (number-theoretic) model, finite binary words are encoded as rational numbers in [0,1]. We may then identify the set of functions computable by analog recurrent neural nets, provided that the type of the weights is given. This research program has been systematically pursued by Hava Siegelmann at the Technion and by her collaborators.
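The base-4 Cantor-set encoding described above can be sketched concretely. The helper names below are mine, and the sketch assumes one common convention: each bit b becomes the base-4 digit 2b+1, so only digits 1 and 3 appear and encoded values stay away from digit boundaries:

```python
from fractions import Fraction

def encode(bits):
    # Map bit b to base-4 digit 2b+1 (so 0 -> 1, 1 -> 3); the resulting
    # rationals lie in a Cantor set within [0, 1].
    return sum(Fraction(2 * b + 1, 4 ** (i + 1)) for i, b in enumerate(bits))

def decode(q):
    # Recover the binary word by peeling off base-4 digits one at a time.
    bits = []
    while q > 0:
        q *= 4
        digit = int(q)          # 1 or 3 for any valid encoding
        bits.append((digit - 1) // 2)
        q -= digit
    return bits
```

For example, the single bit 1 encodes as 3/4, and decoding inverts the encoding exactly because `Fraction` arithmetic is exact.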

The first level of nets is NET[integers]. These nets are historically related to the work of Warren McCulloch and Walter Pitts. As the weights are integer numbers, each processor can only compute a linear combination with integer coefficients applied to zeros and ones. The activation values are thus always zero or one. In this case the nets 'degenerate' into classical devices called finite automata. It was Kleene who first proved that McCulloch and Pitts nets are equivalent to finite automata, and they are therefore able to recognize all regular languages. But they are not capable of recognizing well-formed parenthetic expressions, nor nucleic acid structures, for these languages are not regular...
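To illustrate why integer weights and threshold activations collapse to finite automata, here is a minimal sketch (the construction and names are mine, not from the book) of a recurrent threshold net with integer weights whose single binary state neuron tracks the parity of 1s seen so far, i.e. it recognizes a regular language:

```python
def step(z):
    # Heaviside threshold: with integer weights and 0/1 inputs,
    # this is the only kind of activation the net can produce.
    return 1 if z >= 1 else 0

def parity_net(bits):
    s = 0  # state neuron: 1 iff an odd number of 1s has been seen
    for x in bits:
        a = step(s - x)   # fires iff s=1 and x=0
        b = step(x - s)   # fires iff x=1 and s=0
        s = step(a + b)   # s XOR x, built from two threshold units
    return s
```

Since the state is a fixed number of binary neurons, the net can only occupy finitely many configurations, which is exactly the finite-automaton condition.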

The second relevant class Hava Siegelmann considers is NET[rationals]. Rationals are indeed computable numbers in finite time, and NET[rationals] turns out to be equivalent to Turing machines. Twofold equivalent: rational nets compute the same functions as Turing machines and, under appropriate encoding of input and output, they are able to compute the same functions in exactly the same time. Even granting that rationals are provided for free in nature, rationals of increasing complexity, this resource does not even speed up computations with regard to Turing machines. The class NET[rationals] coincides with the class of (partial) recursive functions of Kurt Gödel and Kleene. About them it is said that they constitute the whole concrete, realizable, mathematical universe.

The third relevant (and maybe surprising to the reader) class is NET[reals]. Reals are indeed, in general, non-computable. But theories of physics abound that consider real variables. If the reader looks at these theories from a more epistemological point of view, as approximative models, then one can argue that, as long as no alternative theories are available, if the old models can encode hypercomputations, then they are not simulable on digital computers. The advantage of building a theory of computation on top of these systems is that nonuniform classes of computation, namely the classes that arise in complexity theory using Turing machines with advice, are uniformly described in NET[reals]. As shown in Hava Siegelmann's book, all sets over finite alphabets can be represented as reals that encode the families of boolean circuits that recognize them. Under efficient time computation, these networks compute not only all efficient computations by Turing machines but also some non-recursive functions, such as (a unary encoding of) the halting problem for Turing machines.

A novel connection between the complexity of the networks in terms of information theory and their computational complexity is developed, spanning a hierarchy of computation from the Turing model to the fully analog model. This leads to the statement of the Siegelmann-Sontag thesis of 'hypercomputation by analog systems', analogously to the Church-Turing thesis of 'computation by digital systems'.

A beautiful non-standard theory of computation is presented in 'Neural Networks and Analog Computation'. I strongly recommend the careful reading of Hava Siegelmann's book, to enjoy the uniformity of the nets' description and to ponder where hypercomputation begins in the limits of classical physical reality. ... Read more


53. Optimization Techniques, Volume 2 (Neural Network Systems Techniques and Applications)
by Cornelius T. Leondes
Hardcover: 398 Pages (1997-11-14)
list price: US$141.00 -- used & new: US$64.95
(price subject to change: see help)
Asin: 0124438628
Average Customer Review: 4.0 out of 5 stars
Canada | United Kingdom | Germany | France | Japan
Editorial Review

Product Description
Optimization Techniques is a unique reference source to a diverse array of methods for achieving optimization, and includes both systems structures and computational methods. The text devotes broad coverage to a unified view of optimal learning, orthogonal transformation techniques, sequential constructive techniques, fast back propagation algorithms, techniques for neural networks with nonstationary or dynamic outputs, applications to constraint satisfaction, optimization issues and techniques for unsupervised learning neural networks, optimum Cerebellar Model of Articulation Controller systems, a new statistical theory of optimum neural learning, and the role of the Radial Basis Function in nonlinear dynamical systems. This volume is useful for practitioners, researchers, and students in industrial, manufacturing, mechanical, electrical, and computer engineering.

Key Features
* Provides in-depth treatment of theoretical contributions to optimal learning for neural network systems
* Offers a comprehensive treatment of orthogonal transformation techniques for the optimization of neural network systems
* Includes illustrative examples and comprehensive treatment of sequential constructive techniques for optimization of neural network systems
* Presents a uniquely comprehensive treatment of the highly effective fast back propagation algorithms for the optimization of neural network systems
* Treats, in detail, optimization techniques for neural network systems with nonstationary or dynamic inputs
* Covers optimization techniques and applications of neural network systems in constraint satisfaction ... Read more

Customer Reviews (1)

4.0 out of 5 stars A Reference Series for those who create & optimize NN's.
Neural networks have seen an explosion in interest and application in the last 10 to 15 years, as they evolved from research on artificial intelligence. One can find books on diverse subjects such as finance, medicine, and any physical or theoretical science with significant sections devoted to the use of neural networks in that discipline. I conducted a non-scientific survey (in 1 minute or less) of the importance of the subject matter by asking Amazon.com how many books it listed on subjects I thought might be comparable in timing and importance. The results (below) imply a significant interest in neural networks, from readers and authors alike. My list does not report on the number of books that contain chapters or significant sections on neural networks.

· Neural Networks = 1021 books listed; DNA = 948 books; Enzymes = 779 books; Genome = 232 books; Human Genome = 100 books

Optimization Techniques is the second in a seven (7) volume series from Academic Press on neural network systems techniques and applications. The series presents itself as the first all-inclusive treatment of the subject matter and is aimed at a wide array of potential readers: researchers, students and practitioners in industrial, mechanical, electrical, manufacturing and computer engineering. As such, one would expect the series to be appealing to a more select audience of research workers focused on creating and improving neural networks, and not so much to those of us who use the applications and interpret the output. This seems to be the case.

This Volume in the series, claiming to be the first comprehensive treatment of optimization techniques including system structure and computational methods, presents the work of nineteen (19) contributors as a synthesis of what is known about neural networks and optimization techniques at the present time. The book is divided into ten (10) sections, each addressing different topic areas. I would not suspect that more than one or two sections would be of interest to the reader in an applied research field.

I found the sections on the learning of nonstationary processes and neural techniques for data analysis to be informative and well written. I did not anticipate having a warm feeling of confidence in my level of understanding the first time I read these sections. I am confident, however, that I know which direction current and future research will take on neural networks. ... Read more


54. Neural Networks
by Steve Ellacott, Deb Bose
 Hardcover: 387 Pages (1996-01-15)
list price: US$49.95
Isbn: 1850322449
Canada | United Kingdom | Germany | France | Japan
Editorial Review

Product Description
Neural networks provide a powerful approach to problems of machine learning and pattern recognition. The underlying mathematics, however, has much more in common with classical applied mathematics. This book introduces the deterministic aspects of the mathematical theory in a comprehensive way. ... Read more


55. Algorithmic Decision Theory: First International Conference, ADT 2009, Venice, Italy, October 2009, Proceedings (Lecture Notes in Computer Science / Lecture Notes in Artificial Intelligence)
Paperback: 460 Pages (2010-01-08)
list price: US$95.00 -- used & new: US$85.86
(price subject to change: see help)
Asin: 3642044271
Canada | United Kingdom | Germany | France | Japan
Editorial Review

Product Description

This volume contains the papers presented at ADT 2009, the first International Conference on Algorithmic Decision Theory. The conference was held in San Servolo, a small island of the Venice lagoon, during October 20-23, 2009. The program of the conference included oral presentations, posters, invited talks, and tutorials.

The conference received 65 submissions, of which 39 papers were accepted (9 of them as posters). The topics of these papers range from computational social choice to preference modeling, from uncertainty to preference learning, and from multi-criteria decision making to game theory.

... Read more

56. Dependability of Engineering Systems: Modeling and Evaluation
by Jovan M. Nahman
Hardcover: 192 Pages (2001-12-18)
list price: US$109.00 -- used & new: US$64.94
(price subject to change: see help)
Asin: 3540414371
Canada | United Kingdom | Germany | France | Japan
Editorial Review

Product Description
The book offers a sound, easily readable theoretical background for dependability prediction and analysis of engineering systems. The book bridges the gap between real-life dependability problems and the very sophisticated and highly specialized books in this field. It is addressed to a broad readership including practicing engineers, reliability analysts and postgraduate students of engineering faculties. Professionals in the field may also find some new material that is not covered in available textbooks, such as fuzzy-logic evaluation of dependability performance, uncertainty assessment, open-loop sequential analysis of discrete-state stochastic processes, and approximate solving of Markov systems. ... Read more


57. Scalable Uncertainty Management: Third International Conference, SUM 2009, Washington, DC, USA, September 28-30, 2009, Proceedings (Lecture Notes in Computer ... / Lecture Notes in Artificial Intelligence)
Paperback: 309 Pages (2009-10-07)
list price: US$83.00 -- used & new: US$67.25
(price subject to change: see help)
Asin: 3642043879
Canada | United Kingdom | Germany | France | Japan
Editorial Review

Product Description

This volume contains the papers presented at the Third International Conference on Scalable Uncertainty Management, SUM 2009, in Washington, DC, September 28-30, 2009. It contains 21 technical papers which were selected out of 30 submitted papers in a rigorous reviewing process. The volume also contains extended abstracts of two invited talks.

... Read more

58. Theory of Randomized Search Heuristics: Foundations and Recent Developments (Series on Theoretical Computer Science)
Hardcover: 360 Pages (2010-09-30)
list price: US$98.00 -- used & new: US$98.00
(price subject to change: see help)
Asin: 9814282669
Canada | United Kingdom | Germany | France | Japan
Editorial Review

Product Description
Randomized search heuristics such as evolutionary algorithms, genetic algorithms, evolution strategies, ant colony and particle swarm optimization turn out to be highly successful for optimization in practice. The theory of randomized search heuristics, which has been growing rapidly in the last five years, also attempts to explain the success of these methods in practical applications. This book covers both classical results and the most recent theoretical developments in the field of randomized search heuristics, such as runtime analysis, drift analysis and convergence. Each chapter provides an overview of a particular domain and gives insights into the proofs and proof techniques of more specialized areas. Open problems remain widespread in randomized search heuristics, which is still a relatively young and vast field; these problems and directions for future research are addressed and discussed in this book. The book will be an essential source of reference for experts in the domain of randomized search heuristics and also for researchers who are involved in, or ready to embark on, this field. As an advanced textbook, it will also benefit graduate students through its comprehensive coverage of topics. ... Read more


