The MPI Standard:
A Progress Report
A. J. G. Hey
The Origins of MPI
Over the past ten years, the message-passing paradigm has been shown to be both a practical and efficient way of programming Distributed Memory (DM) MIMD parallel systems. However, the take-up of such machines by the scientific community, even when there were manifest hardware cost/performance benefits, has been painfully slow. One of the reasons for this slow acceptance of parallel machines has been concern about software development cost and the lack of portability of the resulting parallel software.
The viability of DM parallel computers was first demonstrated by the CalTech Cosmic Cube project of Geoffrey Fox and Chuck Seitz in the mid-1980s. The CalTech group developed the 'domain decomposition' methodology and a set of message-passing libraries to facilitate distributed memory programming. Each processor computes with its own local subset of the entire problem domain, but if data required for a calculation on one node is held by another node, the two nodes exchange that data by message-passing. This set of frequently used message-passing patterns became 'Express', a commercial product marketed by a spin-off company founded by two members of the CalTech group. However, although the Express system was available on several parallel platforms, by then Intel was supporting its own version of message-passing, NX/2, nCUBE its VERTEX system, and so on. Although these systems all did much the same thing, they were all (sometimes subtly) incompatible.
In Europe, the Esprit Genesis project, which began in 1988, had evolved from being largely a hardware project, based around designing a successor to the first-generation SUPRENUM architecture, into Genesis-S, a project focussed on the development of truly portable parallel software. The chosen vehicle for the portability layer was the PARMACS message-passing macros. These macros were originally developed at Argonne National Laboratory by Rusty Lusk and co-workers, and were later developed and extended for Fortran by Rolf Hempel at the GMD in Germany.
By 1991, the Genesis project had succeeded in demonstrating the portability of significant end-user application codes across a large number of different vendor hardware platforms. The project also developed the first portable DM parallel benchmark suite. By Easter 1992, there were a number of competing public domain portable message-passing systems available. Besides PARMACS, which was now supported commercially by Pallas, a spin-off from the ill-fated SUPRENUM project, there were systems such as PVM and PICL becoming popular in the US. Although PVM was popular for distributed heterogeneous computing, there were few, if any, optimized implementations available for genuine parallel systems. Moreover, PARMACS, although gaining in popularity in Europe, was hardly used in the US.

In order to explore the possibility of standardising the message-passing software layer, Ken Kennedy and Geoffrey Fox convened a workshop in Williamsburg, Virginia at Easter 1992. A week later, the RAPS parallel benchmarking consortium held a workshop at the GMD to decide on a message-passing standard for their work. Vaidy Sunderam, originator of PVM, talked at the meeting in Sankt Augustin, and David Walker gave a report of the Williamsburg workshop. Nevertheless, the RAPS consortium felt that the only viable portable system they could adopt at that time was PARMACS. The stage was set for a classic Europe/US split. In order to avoid such a division between Europe and the US, discussions took place in the summer of 1992 at the IBM Workshop in Lech, Austria. The result was a draft proposal for a standard message-passing interface called 'MPI1' by the authors of the report, Jack Dongarra, Rolf Hempel, Tony Hey, and David Walker [1].
At the Supercomputing conference SC92 in Minneapolis, a 'birds-of-a-feather' session was called to discuss the new MPI proposal. After a lively discussion, the group agreed to follow the procedures introduced by Ken Kennedy for the HPF Forum. The agreed goal was to produce a complete agreed draft of an MPI standard by Supercomputing '93 in Portland, Oregon, the following year. In order to achieve what, for a standards body, would be an incredibly fast time-scale, it was agreed that a working group would meet in Dallas for two days every six weeks for the first nine months of 1993. Apart from agreed voting rules and so on, a key factor in the success of this 'informal' standards body was the active participation of the whole parallel computing community: users, implementors, researchers and vendors. For example, the hardware vendors who played a full part in the process were Convex, Cray, IBM, Intel, Meiko, NEC, nCUBE and TMC. Thanks to the successor to the Genesis project, the Esprit PPPE project on Portable Parallel Programming Environments, it was possible for there to be significant European input to the MPI Forum. Regular attendees from Europe with full voting rights came from the GMD, Southampton and Meiko from PPPE, together with Lyndon Clark from EPCC in Edinburgh.
What is MPI?
A key design principle of MPI was to incorporate the most useful features of existing message-passing systems rather than adopt any one pre-existing system. The resulting standard has strong input from IBM's EUI, Intel's NX/2, Express, nCUBE's VERTEX, and PARMACS, together with p4, its successor from the PARMACS team at ANL. There was also valuable input from PVM and PICL, together with more experimental systems such as Zipcode, Chimp and Chameleon.
MPI is an explicit message-passing interface for application programs on parallel systems. Message-passing is perhaps most natural on Distributed Memory parallel systems, but because much of the problem of gaining efficiency on parallel computers comes down to data placement, message-passing is also an efficient way to program Shared Memory parallel systems. MPI contains the best features of existing message-passing libraries plus some genuinely new features not found in any existing system. In particular, the new 'communicator' abstraction will allow the development and support of safe, modular, portable parallel software libraries.
The details of MPI are well explained in a recent book by three of the developers of the standard, Bill Gropp, Rusty Lusk and Tony Skjellum [2]. Here, we will only outline some of the new features compared to earlier message-passing systems. In such systems a typical 'send' operation has the form:
    SEND(address, length, destination, tag)

where:

    address     = memory location of buffer with data to be sent
    length      = length of message in bytes
    destination = process id of process to which message is sent
    tag         = arbitrary integer to allow programmer to
                  restrict receipt of messages
The parameters are a frequently chosen set that represent a good compromise between the hardware capabilities and the programmer's software needs. At the receiving process, it is left up to the system software to provide sufficient queueing capability so that the processor can hold other messages until it receives a message with the correct tag. A typical 'receive' operation has the form:

    RECEIVE(address, maxlen, source, tag, actlen)

where:

    address, maxlen = describe buffer to receive data
    source          = output indicating where message came from
    actlen          = actual number of bytes received
The MPI interface lifts some of the restrictions imposed by such a choice of parameters. The major new features are the following:
Message Buffers.
The parameter set (address, length) is generalized to (address, count, datatype). This allows more efficient handling of non-contiguous data and is also more suitable for heterogeneous computing, in which, for example, the length in bytes of a floating-point number may differ on different machines. Moreover, in MPI users can construct their own datatypes.
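As an illustration, the following C sketch (not from the article; the matrix `a` and the datatype name `col` are purely illustrative) sends one column of a matrix as a single message by first constructing a strided user datatype:

    /* Sketch: the (address, count, datatype) triple and a
       user-constructed datatype. A column of a 10x10 matrix of
       doubles is non-contiguous in C (row-major) storage, but can
       be described once and sent as one message. */
    #include <mpi.h>

    void send_column(double a[10][10], int dest, MPI_Comm comm)
    {
        MPI_Datatype col;

        /* 10 blocks of 1 double each, stride 10 doubles apart */
        MPI_Type_vector(10, 1, 10, MPI_DOUBLE, &col);
        MPI_Type_commit(&col);

        /* (address, count, datatype) replaces (address, length) */
        MPI_Send(&a[0][0], 1, col, dest, 0, comm);
        MPI_Type_free(&col);
    }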
Groups and Communicators.
MPI allows the user to assign processes to 'process groups'. Within a group, each process is allocated a unique rank from 0 to n-1 for an n-process group. With the earlier message-passing systems, the use of tags and wild-card matching could cause problems of interference between user code and third-party scientific library code. MPI extends the notion of a tag with the idea of 'context'. Contexts are allocated at run-time by the system software and are used by the system to match messages. In addition, MPI retains message tags with wild-card matching. The context and group are combined in MPI into a single object called a 'communicator'. Communicators allow the creation of separate message-passing universes with a guarantee of no interference between them. They are used as arguments in most point-to-point and collective communication calls.
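A common idiom that communicators make possible, sketched below (illustrative code, not from the article), is for a parallel library to duplicate the communicator it is handed; the duplicate carries a fresh context, so messages inside the library can never match messages sent by user code, whatever tags either side uses:

    /* Sketch: insulating library traffic with a duplicated
       communicator. MPI_Comm_dup creates a communicator with the
       same process group but a new, system-allocated context. */
    #include <mpi.h>

    void library_init(MPI_Comm user_comm, MPI_Comm *lib_comm)
    {
        int rank, size;

        MPI_Comm_dup(user_comm, lib_comm);
        MPI_Comm_rank(*lib_comm, &rank);   /* unique rank 0..size-1 */
        MPI_Comm_size(*lib_comm, &size);
    }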
Send and Receive Routines.
MPI provides a very rich variety of different types of send and receive routines. Some of these options are a clear historical legacy, arising from programmers demanding the ability to do the same (unsafe but fast) type of send/receive that they used to be able to do on their favourite parallel machine. MPI defines both 'blocking' and 'non-blocking' operations and four communication modes: 'standard', 'synchronous', 'ready' and 'buffered'. The terminology is quite complex, and intending users should take care that the MPI use of terms like blocking and non-blocking accords with their expectations. The use of synchronous mode will be less efficient than other modes but will ensure that the send does not complete until the receive has been initiated.
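As a concrete example, a non-blocking exchange might be written as in the sketch below (hypothetical names, assuming the C binding): both transfers are posted immediately, useful computation can be overlapped, and completion is only guaranteed after the wait call:

    /* Sketch: a non-blocking exchange. MPI_Isend and MPI_Irecv
       return at once with request handles; MPI_Waitall blocks until
       both operations have completed. */
    #include <mpi.h>

    void exchange(double *out, double *in, int n, int partner,
                  MPI_Comm comm)
    {
        MPI_Request reqs[2];
        MPI_Status  stats[2];

        MPI_Irecv(in,  n, MPI_DOUBLE, partner, 0, comm, &reqs[0]);
        MPI_Isend(out, n, MPI_DOUBLE, partner, 0, comm, &reqs[1]);

        /* ... computation that touches neither `in` nor `out` ... */

        MPI_Waitall(2, reqs, stats);
    }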
Collective Communications.
As in the older message-passing libraries, MPI also provides a set of routines that perform coordinated communication amongst a group of processes. These concern data movement (broadcast, scatter, gather, gather-to-all, all-to-all) and global computation (reduce and parallel prefix). A barrier synchronization operation is also provided.
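As a brief illustration (a sketch, not taken from the article; the variables are invented), process 0 might broadcast a control parameter to the whole group and then collect a global sum:

    /* Sketch: a broadcast followed by a reduction. All processes in
       the communicator's group must make the same collective calls
       in the same order. */
    #include <mpi.h>

    void demo_collectives(MPI_Comm comm)
    {
        int    nsteps = 100;    /* meaningful on rank 0 initially */
        double local = 1.0, total;

        MPI_Bcast(&nsteps, 1, MPI_INT, 0, comm);   /* root is rank 0 */
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, comm);
        /* `total` now holds the global sum, on rank 0 only */
    }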
The technical development of MPI was organized into a number of subgroups: Point-to-Point Communications, Collective Communications, Groups, Contexts and Communicators, Process Topologies, Language Bindings, Environmental Management, and Profiling. In this short overview it is impossible to do justice to the full power of MPI and to describe all its novel features. As can be seen from the list above, MPI also supports communication patterns for common problem topologies and has an interface that allows profiling tools to be attached in a straightforward manner.
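To give a flavour of the topology support (again a sketch with invented values, not from the article), a periodic two-dimensional grid of processes can be created and each process's grid coordinates recovered:

    /* Sketch: a 4x4 periodic Cartesian process grid for 16
       processes. Setting reorder to 1 lets MPI renumber ranks to
       match the underlying machine topology. */
    #include <mpi.h>

    void make_grid(MPI_Comm comm, MPI_Comm *grid)
    {
        int dims[2]    = {4, 4};
        int periods[2] = {1, 1};   /* wrap around in both dimensions */
        int coords[2], rank;

        MPI_Cart_create(comm, 2, dims, periods, 1, grid);
        MPI_Comm_rank(*grid, &rank);
        MPI_Cart_coords(*grid, rank, 2, coords);  /* (row, col) */
    }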
Is MPI large or small? At first sight MPI may appear somewhat daunting, since a count reveals that it contains 129 different possible function calls. However, this is misleading, since the number of key ideas in MPI is actually very small. Useful parallel programs can be written using a minimal set of just six functions (a complete example using only these six follows the list):
MPI_INIT
MPI_COMM_SIZE
MPI_COMM_RANK
MPI_SEND
MPI_RECV
MPI_FINALIZE
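For example, the following complete C program (a minimal sketch, not taken from the article; the message-counting logic is purely illustrative) uses only those six calls to pass each process's rank to process 0:

    /* Minimal sketch: a complete MPI program using only the six
       essential functions. Every non-zero rank sends its rank
       number to process 0, which prints each value it receives. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank != 0) {
            MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        } else {
            int i, msg;
            MPI_Status status;
            for (i = 1; i < size; i++) {
                MPI_Recv(&msg, 1, MPI_INT, MPI_ANY_SOURCE, 0,
                         MPI_COMM_WORLD, &status);
                printf("received rank %d\n", msg);
            }
        }

        MPI_Finalize();
        return 0;
    }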
The other functions in the MPI standard add flexibility (datatypes), robustness (non-blocking send/receive), efficiency (ready mode), modularity (groups, communicators) and convenience (collective operations, topologies). The original MPI standard [3] was published in 1994 and an online version incorporating some minor corrections is available at the web-site http://www.mcs.anl.gov/mpi/mpi-report/mpi-report
MPI: The Future
There are now several good public domain implementations of MPI available. Information about these releases and other MPI news can be found at the URL http://www.mcs.anl.gov/mpi/. Perhaps more importantly, there are now several vendor implementations available, for the IBM SP2, SGI PowerChallenge and Convex Exemplar systems. Implementations also exist for the Cray T3D and the Intel Paragon, although these are not yet available as vendor-supported products.

Despite this undoubted success, MPI still has a number of obvious shortcomings. Most notably absent from the present version of MPI is any mention of parallel I/O and dynamic process management. For this reason an 'MPI-2' Forum is now under way and will be reporting on progress at Supercomputing '95 in San Diego in December. The MPI-2 initiative is not seeking to radically change the present standard but hopes to improve the applicability and usability of MPI by addressing some important issues that were deliberately omitted from the original standard in order to ensure that the Forum met its deadlines. The new initiative is well supported by the vendors (BBN, Condor, Convex, Cray, Hitachi, Hughes, IBM, Intel, Meiko, nCUBE, NEC, Pallas, SGI and TMC) with only a few notable exceptions. Since there is already an initiative addressing the issue of parallel I/O involving NASA Ames, Livermore and IBM, the Forum will defer consideration of I/O matters until this project has made its recommendations. The major issues being considered by the MPI-2 Forum therefore concern dynamic process management and 'one-sided communications'. One-sided communications are concerned with remote put and get operations that proceed without the intervention of one of the processors. Such operations will be useful in producing efficient HPF systems that compile down to MPI.
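To give a concrete feel for the idea, the sketch below shows one way a remote 'put' might look; it is not from the article and uses the window-based interface that the MPI-2 Forum eventually standardized, so every name here is an assumption about an interface still under discussion at the time of writing:

    /* Sketch: a one-sided put. Each process exposes `buf` in a
       window; rank 0 then writes a value directly into rank 1's
       buffer without rank 1 issuing any receive. */
    #include <mpi.h>

    void one_sided_demo(MPI_Comm comm)
    {
        int rank, buf = 0, value = 42;
        MPI_Win win;

        MPI_Comm_rank(comm, &rank);
        MPI_Win_create(&buf, sizeof(int), sizeof(int),
                       MPI_INFO_NULL, comm, &win);

        MPI_Win_fence(0, win);              /* open access epoch  */
        if (rank == 0)
            MPI_Put(&value, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
        MPI_Win_fence(0, win);              /* complete all puts  */

        MPI_Win_free(&win);
    }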
Although MPI was designed by a committee, it has proved to be a popular and robust standard that is gathering wide user and vendor acceptance. Although originally designed for message-passing on DM parallel systems, it is now acknowledged that message-passing with MPI can also provide an efficient and portable way of programming SM systems. We conclude that MPI is alive and well.

References

[1] 'MPI1: A Proposal for a Message-Passing Interface Standard', J. Dongarra, R. Hempel, A. J. G. Hey and D. Walker, ORNL Report, 1992.
[2] 'Using MPI: Portable Parallel Programming with the Message-Passing Interface', W. Gropp, E. Lusk and A. Skjellum, MIT Press, 1994.
[3] 'MPI: A Message-Passing Interface Standard', International Journal of Supercomputer Applications and High Performance Computing, Vol. 8, No. 3/4, 1994.