Geometry.Net - the online learning center
Page 2     21-40 of 170    Back | 1  | 2  | 3  | 4  | 5  | 6  | 7  | 8  | 9  | Next 20

         Parallel Computing:     more books (100)
  1. Handbook of Parallel Computing and Statistics (Statistics:A Series of Textbooks and Monographs)
  2. High Performance Heterogeneous Computing (Wiley Series on Parallel and Distributed Computing) by Jack Dongarra, Alexey L. Lastovetsky, 2009-08-03
  3. High Performance Computing in Science and Engineering 2000: Transactions of the High Performance Computing Center Stuttgart (HLRS) 2000
  4. High Performance Cluster Computing: Architectures and Systems, Vol. 1 by Rajkumar Buyya, 1999-05-31
  5. High-Performance Computing : Paradigm and Infrastructure by Laurence T. Yang, Minyi Guo, 2005-08-12
  6. Parallel Computing for Bioinformatics and Computational Biology: Models, Enabling Technologies, and Case Studies (Wiley Series on Parallel and Distributed Computing)
  7. Parallel Processing for Scientific Computing (Software, Environments and Tools), edited by Michael A. Heroux, Padma Raghavan, and Horst D. Simon, 2006-11-01
  8. Performance Evaluation of Parallel And Distributed Systems (Distributed, Cluster and Grid Computing) (v. I)
  9. Cloud Computing Principles and Paradigms (Wiley Series on Parallel and Distributed Computing) by Rajkumar Buyya, James Broberg, et al., 2011-04-04
  10. Distributed Computing: Fundamentals, Simulations, and Advanced Topics (Wiley Series on Parallel and Distributed Computing) by Hagit Attiya, Jennifer Welch, 2004-03-25
  11. Parallel Computing Works! by Geoffrey C. Fox, Roy D. Williams, et al., 1994-05-15
  12. Parallel Computing on Heterogeneous Clusters by Alexey L. Lastovetsky, 2003-08-11
  13. Mobile Intelligence (Wiley Series on Parallel and Distributed Computing) by Laurence T. Yang, 2010-02-08
  14. Smart Environments: Technology, Protocols and Applications (Wiley Series on Parallel and Distributed Computing) by Diane Cook, Sajal Das, 2004-11-02

21. ScienceDirect - Parallel Computing - List Of Issues
Journal homepage at www.sciencedirect.com/webeditions/journal/01678191; also mirrored at www.elsevier.nl/locate/parco.
VCPC, the European Centre for Parallel Computing at Vienna, is an HPC centre in Austria which provides parallel computing resources to academia and industry, including national and international projects.
http://www.sciencedirect.com/science/journal/01678191
Articles in Press
Volume 29, Issue 4, Pages 373-551 (April 2003)
Volume 29, Issue 3, Pages 285-371 (March 2003): Parallel computing in numerical optimization
Volume 29, Issue 2, Pages 167-283 (February 2003)
Volume 29, Issue 1, Pages 1-166 (January 2003)
Earlier: Volume 28, Volume 27, Volume 26, Volume 25 ... Volume 21

22. Upcoming Compiler And Parallel Computing Conferences
Upcoming Compiler and Parallel Computing Conferences. ParCo2003 Parallel Computing Conference, Dresden, Germany; 9/2/03; cfp 3/15/03
http://www.cs.rice.edu/~roth/conferences.html
Upcoming Compiler and Parallel Computing Conferences
Here is a list of upcoming conferences of interest to researchers in the areas of compilers, parallel processing, and supercomputing. Each entry has the following format:
Conference title
Place and starting date
Submission deadline
This list is in two parts: conferences accepting submissions and conferences closed to submissions . After a conference is held its entry is moved to a list of past conferences . You can use your browser's FIND capability to search for a conference acronym. Last updated: March 17, 2003
Conferences open to submissions (sorted by submission deadline)
OOPSLA 2003: 18th ACM SIGPLAN Conf on Object-Oriented Programming, Systems, Languages, and Applications
Anaheim, California; 10/26/03
cfp: 3/21/03
FPL 2003: 13th Int'l Conf on Field Programmable Logic and Applications
Lisbon, Portugal; 9/1/03
cfp: 3/14/03 (elapsed) extended to 3/21/03
ICFP 2003: 8th ACM SIGPLAN Int'l Conf on Functional Programming
Uppsala, Sweden; 8/25/03 (in conjunction with PLI'03)
cfp: 3/29/03
PPDP 2003: 5th ACM SIGPLAN Int'l Conf on Principles and Practice of Declarative Programming
Uppsala, Sweden; 8/25/03 (in conjunction with

23. Centre For Parallel Computing
Centre for parallel computing.
http://www.cpc.wmin.ac.uk/
The Cavendish School of Computing Science 's Centre for Parallel Computing is a focus for research in the technology and applications of parallel and distributed computations. Activities in the Centre include the development of tools and environments to support the parallel software engineering life-cycle; parallel implementation models for declarative programming systems; and parallel discrete event simulation.
Applications involving collaboration with other research groups and industry include the areas of telecommunications; performance engineering; molecular modelling; computational mechanics; computational grids; and control systems. The Centre is well supported by industry, EPSRC, the EU and other funding bodies.
The Director of the Centre is Professor Stephen Winter
Centre for Parallel Computing
University of Westminster,
Cavendish School of Computing Science,
115 New Cavendish Street,
London W1W 6UW
Telephone 0207 911 5000
Edited by Loretta Pletti.

24. Parallel Computing Works
A book about parallel computing, focusing on a few specific research projects done at Caltech.
http://www.npac.syr.edu/copywrite/pcw/
Parallel Computing Works
This book describes work done at the Caltech Concurrent Computation Program, Pasadena, California. This project ended in 1990 but the work has been updated in key areas until early 1994. The book also contains links to some current projects. ISBN 1-55860-253-4 Morgan Kaufmann Publishers , Inc. 1994 ordering information
What is Contained in Parallel Computing Works?
We briefly describe the contents of this book.
Applications
The heart of this work is a set of applications largely developed at Caltech from 1985-1990 by the Caltech Concurrent Computation Group. These are linked to a set of tables and Glossaries. Applications are classified into 5 problem classes:
Synchronous Applications
Such applications tend to be regular and characterised by algorithms employing simultaneous identical updates to a set of points; more applications can be found in Chapters

25. Particle Applications - Pipeline Computing
An overview of how multiple particle systems can be simulated using parallel computing. Such structures are not naturally represented in the static arrays that are currently used for synchronous data parallel computing and are the subject of
http://www.npac.syr.edu/EDUCATION/PUB/hpfe/module5/
Module 5. Particle Applications - Pipeline Computing
Many questions in science can be answered by viewing a physical system as a collection of particles that obey certain laws. A familiar example is that the universe can be viewed as a collection of astronomical bodies which obey Newton's laws of gravitation. The laws in the above examples can be written as equations that are typical of the class of equations known as Ordinary Differential Equations (ODEs). These equations have well-known solution techniques which can be easily expressed in a data parallel way. In the case of particle systems, solving the equations also involves calculating a function involving interactions between all pairs of particles. We will discuss a variety of ways to approach such "all pairs" calculations in a data parallel way, as well as discuss such issues as deciding which parts of a calculation to parallelize and how to achieve load balancing in a program.
5.1 Particle applications
The application that we will discuss is the universe of astronomical particles under Newton's laws of motion, commonly known as the N-body problem. We suppose that our system consists of N particles, each with a mass, moving with some velocity through 3-dimensional space. Part of Newton's system is the recognition that the mass and velocity of the particles are what affect the system; in particular, we can disregard the diameters and shapes of the particles and treat them as point masses. Velocity is, of course, defined to be the change in position over time, and acceleration to be the change in velocity over time.
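As an illustration (not part of the original module), the all-pairs gravitational calculation described above can be sketched in a data-parallel style. NumPy stands in here for the data-parallel languages the module discusses, and the softening term `eps` is an added assumption that avoids the singularity when a particle is paired with itself:

```python
import numpy as np

def accelerations(pos, mass, G=1.0, eps=1e-3):
    """All-pairs gravitational accelerations, computed in a data-parallel style.

    pos  : (N, 3) array of particle positions
    mass : (N,)   array of particle masses
    eps  : softening term (an illustrative assumption, not from the module)
    """
    # Pairwise displacement vectors r_ij = pos_j - pos_i, shape (N, N, 3).
    diff = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]
    # Pairwise squared distances with softening, shape (N, N).
    dist2 = (diff ** 2).sum(axis=-1) + eps ** 2
    inv_d3 = dist2 ** -1.5
    np.fill_diagonal(inv_d3, 0.0)   # a particle exerts no force on itself
    # a_i = G * sum_j m_j * r_ij / |r_ij|^3, as one vectorized contraction.
    return G * np.einsum('ij,j,ijk->ik', inv_d3, mass, diff)

# Two equal point masses attract each other along the line joining them.
pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
mass = np.array([1.0, 1.0])
acc = accelerations(pos, mass)
print(acc[0])   # approximately [1, 0, 0]: particle 0 is pulled toward particle 1
```

Note that this direct method does N^2 pairwise evaluations per step; the pipeline and load-balancing techniques the module goes on to discuss exist precisely to organize this work across processors.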

26. Particle Applications - Pipeline Computing
An overview of how multiple particle systems can be simulated using parallel computing.
http://www.npac.syr.edu/EDUCATION/PUB/hpfe/module5/index.html

27. The History Of The Development Of Parallel Computing
The History of the Development of Parallel Computing, by Gregory V. Wilson (gvw@cs.toronto.edu)
http://ei.cs.vt.edu/~history/Parallel.html
The History of the Development of Parallel Computing
Gregory V. Wilson (gvw@cs.toronto.edu)
"From the crooked timber of humanity / No straight thing was ever made"
[1] IBM introduces the 704. Principal architect is Gene Amdahl; it is the first commercial machine with floating-point hardware, and is capable of approximately 5 kFLOPS.
[2] IBM starts the 7030 project (known as STRETCH) to produce a supercomputer for Los Alamos National Laboratory (LANL). Its goal is to produce a machine with 100 times the performance of any available at the time.
[3] LARC (Livermore Automatic Research Computer) project begins to design a supercomputer for Lawrence Livermore National Laboratory (LLNL).
[4] Atlas project begins in the U.K. as a joint venture between the University of Manchester and Ferranti Ltd. Principal architect is Tom Kilburn.
[5] Digital Equipment Corporation (DEC) founded.
[6] Control Data Corporation (CDC) founded.

28. SAL- Parallel Computing - Programming Languages & Systems
aCe, a data-parallel computing environment designed to improve the adaptability of algorithms; communication libraries; and other parallel programming languages and systems.
http://sal.kachinatech.com/C/1/
Most parallel programming languages are conventional or sequential programming languages with some parallel extensions. A compiler is a program that converts source code written in a specific language into another format, eventually into assembly or machine code that a computer understands. For message-passing based distributed memory systems, "compilers" often map communication functions onto prebuilt routines in communication libraries. Some systems listed here are basically communication libraries; however, they have their own integrated utilities and programming environments.
aCe
a data-parallel computing environment designed to improve the adaptability of algorithms.
ADAPTOR
a High Performance Fortran compilation system.
Arjuna
an object-oriented programming system for distributed applications.
Charm/Charm++
a machine-independent parallel programming system.
Cilk
an algorithmic multithreaded language.
Clean
a higher-order, pure and lazy functional programming language.
CODE
a visual parallel programming system.

29. Self-Managed, Guaranteed Distributed Computing By DataSynapse - Homepage
Commercial enterprise that purchases idle PC capacity and resells it to users with complex parallel computing tasks.
http://www.datasynapse.com/
DataSynapse Partners with Calypso Technology
"The Calypso-DataSynapse joint solution will permit calculations within Calypso to be performed faster by distributing them over a network of computers via LiveCluster"
"Showing that it's no fly-by-night operation, DataSynapse has secured a second major partnership - with Calypso."
Abbey National Treasury Services Signs with DataSynapse

"We've seen a 95 percent performance improvement. Some valuation processing has gone from ten minutes down to 30 seconds"

DataSynapse Written Feature in Financial Technology Magazine

"Grids for financial services firms are a reality and are becoming a competitive advantage for many firms. Companies are rapidly implementing distributed computing infrastructures to support their complex, mission-critical applications."
...
DataSynapse was the only Grid-distributed computing company to be included.
DataSynapse Named One of New York's Top Ten Technology Companies PARTNERSHIPS
IBM Launches Commercial Grid Offerings; DataSynapse Plays Key Role

"The benefits of Grid computing for e-business on demand are here; it's now and it's real", Tom Hawk, IBM's general manager of Grid Computing Worldwide.
The DataSynapse - Calypso Partnership
Learn more about the DataSynapse/Calypso partnership and what it can mean for you.

30. Moving ...
Paderborn Center for Parallel Computing (PC2). The PC2 web server has moved to a new location.
http://www.uni-paderborn.de/pcpc/pcpc.html
Paderborn Center for Parallel Computing
Error: Object not found!
Welcome to the PC2 web services. Please note that our web server has moved to a new location.
The new URL is: http://www.uni-paderborn.de/pc2/
Please update your links. In case you got here by an invalid reference in our new web tree, we apologize and ask you to write us an e-mail.
Contact: PC2 Webmaster

33. Computer Science Department - Bordeaux 1 University
Computer Science Department. Research areas include combinatorics, algorithmics, logic, automata, parallel computing, symbolic programming, and graphics.
http://dept-info.labri.u-bordeaux.fr/ANGLAIS/
Université Bordeaux I
UFR Mathématiques et Informatique
Computer Science Department
The Department
Programs

32. Parallel Computing Research
Newsletter of the Center for Research on Parallel Computation (CRPC) at Rice University. Index of Parallel Computing Research issues: 1999 - Volume 7, Issue 1 - Spring/Summer 1999; 1998.
http://www.crpc.rice.edu/CRPC/newsletters/

33. New South Wales Centre For Parallel Computing
New South Wales Centre for Parallel Computing. If you are looking for the www.hpc.unsw.edu.au webserver, please look here.
http://server.srcpc.unsw.edu.au/
Up: New South Wales Centre for Parallel Computing
* If you are looking for the www.hpc.unsw.edu.au webserver, please look here.
Welcome to the NSWCPC Web server. The NSWCPC is a consortium of 5 New South Wales universities, and operates a Thinking Machines CM5 and a Silicon Graphics Power Challenge. The role of the NSWCPC has now been subsumed by the Australian Centre for Advanced Computing and Communications (ac3). Academic users of ac3 facilities should consult the ac3 (Academic) website as the primary source of information on the facility.
Postscript Files
Many of the documents served by this site are compressed PostScript. You will need the gzip utility to uncompress a document, and you may find ghostscript/ghostview a useful previewer. This software is freely available for most platforms on the Internet. If your Web browser has been set up correctly, it should automatically uncompress the document and spawn the previewer once you click on the hotlink. Mac and PC users: the latest version of ghostview will uncompress and display gzipped PostScript files in one step. Please read the instructions on the ghostscript site for how to install and set up the previewer in your browser. Unix users will need to make sure their mailcap file has the following line: application/postscript; ghostview %s

34. Parallel Computing With Linux
Article about choosing hardware and configuring a Beowulf. Beowulf-style clustering, begun at NASA's Goddard Space Flight Center, extends the utility of Linux to the realm of high performance parallel computing.
http://www.acm.org/crossroads/xrds6-1/parallel.html
ACM Crossroads Student Magazine
Parallel Computing With Linux
By Forrest Hoffman and William Hargrove
Linux is just now making a significant impact on the computing industry, but it has been a powerful tool for computer scientists and computational scientists for a number of years. Aside from the obvious benefits of working with a freely-available, reliable, and efficient open source operating system [ ], Linux has gained from the advent of Beowulf-style cluster computing, pioneered by Donald Becker, Thomas Sterling, et al. [ ]. If a computational problem can be solved in a loosely-coupled distributed memory environment, a Beowulf cluster, or Pile of PCs (POP), may be the answer; and it "weighs in" at a price point traditional parallel computer manufacturers cannot touch.
Figure 1: The Stone SouperComputer at Oak Ridge National Laboratory.
We became involved in cluster computing more than two years ago, after developing a proposal for the construction of a Beowulf cluster to support a handful of research projects. The proposal was rejected, but because we had already begun development of a new high-resolution landscape ecology application, we decided to build a cluster out of surplus PCs (primarily Intel 486s) destined for salvage. We began intercepting excess machines at federal facilities in Oak Ridge, Tennessee, and processing them into usable nodes. By September 1997, we had a functional parallel computer system built out of no-cost hardware. Today we have a constantly-evolving 126-node, highly heterogeneous Beowulf-style cluster, called the Stone SouperComputer (see

35. Paderborn Center For Parallel Computing

http://www.upb.de/pc2/

36. Ada95
In the context of parallel computing, with brief mention of Fortran90 and High Performance Fortran (HPF). Short document.
http://fedelma.astro.univie.ac.at/web/ada_parallel.html
High Performance Scientific Computing with Ada95
object-oriented and parallel
Why Ada95?
Ada95 is the first standardised object oriented programming language (since 15-Feb-1995, ISO/IEC 8652:1995). It provides powerful data abstraction mechanisms, hierarchical libraries, inheritance and polymorphism. Generic packages and the possibility of extensions to types and packages facilitate software reuse.
What about
Just a few months from the next millennium, the latest Fortran standard does not even offer the features of Ada83 (ANSI/MIL-STD 1815A). There is no real exception handling, there are no generics, no OOP features, no tasking, no child libraries, and typing is as soft as ever...
But there is High Performance Fortran?!
Indeed, there is HPF with all the drawbacks of the Fortran language. HPF is data parallel and provides predefined subprograms which allow the programmer to distribute large datasets over several processors and to manipulate parts of these datasets in parallel. Much of all this is done by the compiler but you still have to carefully identify those parts of the program which you can parallelise in this way; usually special parallel statements are scattered throughout the codes. You have virtually no control over the details.
Now there is Ada95!
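For readers who want a concrete picture of the pattern HPF automates, block-distributing a large dataset over several processors and applying the same operation to each local piece can be sketched loosely in Python. This is an analogy only, not HPF or Ada syntax; the chunking scheme, worker count, and the `local_update` operation are all illustrative choices:

```python
from multiprocessing import Pool

def local_update(chunk):
    """The same operation, applied independently to each local piece."""
    return [x * x + 1.0 for x in chunk]

def distribute(data, nproc):
    """Block-distribute data over nproc workers, in the spirit of
    HPF's BLOCK distribution: contiguous, roughly equal-sized pieces."""
    size = (len(data) + nproc - 1) // nproc
    return [data[i:i + size] for i in range(0, len(data), size)]

if __name__ == '__main__':
    data = [float(i) for i in range(10)]
    chunks = distribute(data, 4)
    with Pool(4) as pool:                       # one worker per "processor"
        parts = pool.map(local_update, chunks)  # each worker updates its block
    result = [x for part in parts for x in part]
    print(result[:3])   # [1.0, 2.0, 5.0]
```

The point the page makes still applies: the programmer must decide which parts of the program are safe to split up this way, and has little control over the low-level details the runtime handles.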

37. Parallel Computing Toolkit: Product Information
Parallel Computing Toolkit: Unleash the Power of Parallel Computing. Parallel Computing Toolkit brings parallel computation to anyone
http://www.wolfram.com/products/applications/parallel/
Unleash the Power of Parallel Computing
Parallel Computing Toolkit brings parallel computation to anyone having access to more than one computer on a network or anyone working on multiprocessor machines.
It implements many parallel programming primitives and includes high-level commands for parallel execution of operations such as animation, plotting, and matrix manipulation. Also supported are many popular new programming approaches such as parallel Monte Carlo simulation, visualization, searching, and optimization. The implementations for all high-level commands in Parallel Computing Toolkit are provided in Mathematica source form, so they can serve as templates for building additional parallel programs.
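As a rough illustration of the parallel Monte Carlo idea mentioned above (written in Python rather than Mathematica, so none of the names below are Parallel Computing Toolkit syntax): independent workers each sample a private random stream, and the master combines their counts to estimate pi:

```python
import random
from multiprocessing import Pool

def count_hits(args):
    """Count random points falling inside the unit quarter-circle."""
    n, seed = args
    rng = random.Random(seed)   # independent stream per worker
    hits = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

def parallel_pi(total=400_000, workers=4):
    """Split the sampling across workers, then combine the counts."""
    per = total // workers
    with Pool(workers) as pool:
        hits = sum(pool.map(count_hits, [(per, s) for s in range(workers)]))
    return 4.0 * hits / (per * workers)

if __name__ == '__main__':
    print(parallel_pi())   # close to 3.14159 for large sample counts
```

The design point is that each worker's task is embarrassingly parallel: no communication is needed until the final reduction, which is why Monte Carlo is a standard first example for toolkits like this one.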
Licensing Information On a network, Parallel Computing Toolkit needs a licensed Mathematica kernel for each process. On non-networked multiprocessor Macintosh and Windows computers, the Mathematica license agreement allows users of a single-machine license to run multiple kernels. Parallel Computing Toolkit is available for all platforms that run Mathematica 3 or 4. These include Windows 95/98/Me/NT/2000 (PC), Mac OS (Power Macintosh), Linux (PC, Alpha, PowerPC), Solaris, HP-UX, IRIX, AIX, Digital Unix/Compaq Tru64 Unix, and compatible systems. Creating active connections requires

38. Parallel Computing Toolkit For Mathematica: Inexpensive Computing Solution With
Parallel Computing Toolkit Provides Inexpensive Computing Solution with High Functionality. February 7, 2000. With the release of
http://www.wolfram.com/news/pct.html
Parallel Computing Toolkit Provides Inexpensive Computing Solution with High Functionality
February 7, 2000. With the release of Parallel Computing Toolkit, Wolfram Research officially introduces parallel computing support for Mathematica. Parallel Computing Toolkit for Mathematica makes parallel programming easily affordable to users with access to either a multiprocessor machine or a network of heterogeneous machines, without requiring dedicated parallel hardware. Parallel Computing Toolkit can take advantage of existing Mathematica kernels on all supported operating systems, including Unix, Linux, Windows, and Macintosh, connected through TCP/IP, thus enabling users to use existing hardware and Mathematica licenses to create low-cost "virtual parallel computers." Parallel Computing Toolkit supports all common parallel programming paradigms such as virtual shared or distributed memory, automatic or explicit scheduling, and concurrency including synchronization, locking, and latency hiding. Other features of Parallel Computing Toolkit include machine-independent implementation, parallel functional programming, and failure recovery and automatic reassignment of stranded processes in the event of a system failure.

39. The Center For Advanced Computing @ The University Of Michigan
(May 15) We Have a New Name: The Center for Parallel Computing is now the Center for Advanced Computing. In merging with the Laboratory
http://cac.engin.umich.edu/
the center for advanced computing
college of engineering @ university of michigan
AMD Clusters

IBM SPs

Intel Cluster

Itanium SMP
...
SGI Origin

The Center for Advanced Computing (CAC) delivers high performance computing, grid infrastructure, very large data storage, and advanced visualization services through the College of Engineering and throughout the University of Michigan. Teamed with Michigan's MGRID, and as a primary resource partner of NPACI, our outreach extends both near and far.
News
A change to AMD cluster morpheus' queues: Morpheus no longer supports jobs that run more than 24 hours. All jobs that run longer than 24 hours must move to the hypnos cluster. (March 5)
The CAC has a new staff member: Chris Messina has recently joined our team. Chris is an HPC system administrator, and will support all of the CAC's resources. (February 4)
Our SGI Origin's system software has been updated. Among the changes:
  • Irix OS updated to 6.5.18m
  • C/C++/Fortran77/90 compilers and libraries updated to SGI's Nov 2002 release
  • AFS client upgraded to 3.6 Patch 6

40. Is Parallel Computing Dead?
Article about the future of the parallel computing industry.
http://www.crpc.rice.edu/CRPC/newsletters/oct94/director.html
