Concurrent Programming with Threads
Rajkumar Buyya
School of Computer Science and Software Engineering, Monash Technology, Melbourne, Australia
Email / URL: .au/rajkumar

Objectives
- Explain parallel computing from architecture and OS through programming paradigms and applications.
- Explain the multithreading paradigm and all aspects of how to use it in an application:
  - cover all basic MT concepts;
  - explore issues related to MT;
  - contrast Solaris, POSIX, and Java threads;
  - look at the APIs in detail;
  - examine some Solaris, POSIX, and Java code examples.
- Debate: MPP and cluster computing.

Agenda
- Overview of computing
- Operating system issues
- Threads basics
- Multithreading with Solaris and POSIX threads
- Multithreading in Java
- Distributed computing
- Grand challenges
- Solaris, POSIX, and Java example code

Computing Elements
(Figure: the layers of a computing system - applications, programming paradigms, operating system, hardware.)

Two Eras of Computing
(Figure: timeline from 1940 to about 2030. The sequential era and then the parallel era each mature through the same stages: architectures, compilers, applications, problem-solving environments (P.S.Es).)

History of Parallel Processing
Parallel processing can be traced to a tablet dated around 100 BC. The tablet has three calculating positions; from the multiple positions we can infer that they were used for reliability and/or speed.

Motivating Factors
Just as we learned to fly not by building a machine that flaps its wings like a bird, but by applying the aerodynamic principles demonstrated by nature, parallel processing was modelled on biological systems. The aggregated speed with which the brain carries out complex calculations, even though an individual neuron's response is slow (on the order of milliseconds), demonstrates the feasibility of parallel processing.

Why Parallel Processing?
- Computation requirements are ever increasing: visualization, distributed databases, simulations, scientific prediction (e.g. earthquakes), etc.
- Sequential architectures are reaching physical limitations (speed of light, thermodynamics).

Technical Computing
Solving technology problems using computer modeling, simulation and analysis: life sciences, mechanical design and analysis (CAD/CAM), aerospace.

Computational Power Improvement
(Figures: C.P.I. versus number of processors for multiprocessor and uniprocessor systems; growth versus age, contrasting vertical and horizontal improvement.)

Why Parallel Processing? (continued)
- The technology of parallel processing is mature and can be exploited commercially; there is significant R&D work on the development of tools and environments.
- Significant developments in networking technology are paving the way for heterogeneous computing.
- Hardware improvements such as pipelining and superscalar execution are not scalable and require sophisticated compiler technology.
- Vector processing works well only for certain kinds of problems.

A Parallel Program Has and Needs
- Multiple "processes" active simultaneously, solving a given problem, generally on multiple processors.
- Communication and synchronization of its processes, which form the core of parallel programming efforts (a minimal illustration follows below).
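To make the synchronization point concrete, here is a minimal sketch (not from the original slides) of two POSIX threads that communicate through one shared counter protected by a mutex; the names counter, increment and NITER are illustrative. Compile with cc -pthread.

    #include <pthread.h>
    #include <stdio.h>

    #define NITER 100000

    static long counter = 0;                      /* shared state */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *increment(void *arg)
    {
        for (int i = 0; i < NITER; i++) {
            pthread_mutex_lock(&lock);            /* synchronize access to the counter */
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);                   /* wait for both workers */
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);       /* always 2 * NITER thanks to the mutex */
        return 0;
    }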
Processing Elements Architecture
Simple classification by Flynn (number of instruction and data streams):
- SISD - conventional
- SIMD - data parallel, vector computing
- MISD - systolic arrays
- MIMD - very general, multiple approaches
Current focus is on the MIMD model, using general-purpose processors (no shared memory).

SISD: A Conventional Computer
Speed is limited by the rate at which the computer can transfer information internally. Examples: PC, Macintosh, workstations.

The MISD Architecture
More of an intellectual exercise than a practical configuration. A few were built, but none are commercially available.

SIMD Architecture
Examples: CRAY vector processing machines, Thinking Machines CM*.

Multithreading - Multiprocessors: Concurrency vs Parallelism
(Figure: with a single CPU, threads P1, P2 and P3 are interleaved over time (concurrency); with one CPU per thread, the number of executing processes equals the number of CPUs and the threads run truly in parallel.)

Computational Model
Parallel execution arises from:
- concurrency of threads on virtual processors;
- concurrency of threads on physical processors;
- true parallelism, where the thread-to-processor mapping is 1:1.

General Architecture of the Thread Model
- User-level scheduling (user) and kernel-level scheduling (kernel).
- Hides the details of the machine architecture.
- Maps user threads to kernel threads.
- The process virtual memory is shared: a state change made in the VM by one thread is visible to the other threads.

Process Parallelism (MISD and MIMD processing)
Different operations are applied concurrently: one instruction stream computes add(a, b, r1) while another computes sub(c, d, r2), each on its own processor.

    int add(int a, int b, int &result)  { /* function stuff */ }
    int sub(int a, int b, int &result)  { /* function stuff */ }

    pthread_t t1, t2;
    pthread_create(...);   /* one thread runs add, the other runs sub (call truncated on the slide) */

Data Parallelism (SIMD processing)
The same operation (sort) is applied to different partitions of the data: one thread sorts d0..dn/2, another sorts dn/2+1..dn.

    sort(int *array, int count) { /* ... */ }

    pthread_t thread1, thread2;
    pthread_create(...);   /* each thread sorts its half of the array (call truncated on the slide) */

Process and Threaded Models
- Creation of a new entity: fork() in the process model; thr_create() in the threads model (thr_create() builds the new thread and starts its execution).
- Start of execution: exec() / thr_create().
- Wait for completion: wait() / thr_join().
- Exit and destroy: exit() / thr_exit().

Code Comparison

    /* process segment */
    main() { fork(); fork(); fork(); }

    /* thread segment */
    main()
    {
        thread_create(0, 0, func(), 0, 0);
        thread_create(0, 0, func(), 0, 0);
        thread_create(0, 0, func(), 0, 0);
    }

Independent Threads (a printing thread and an editing thread)

    printing() { ... }
    editing()  { ... }

    main()
    {
        ...
        id1 = thread_create(printing);
        id2 = thread_create(editing);
        thread_run(id1, id2);
        ...
    }

Cooperative Threads - File Copy
Two buffers (buff[0], buff[1]) are shared by a reader and a writer:

    reader()
    {
        ...
        lock(buff[i]);
        read(src, buff[i]);
        unlock(buff[i]);
        ...
    }

    writer()
    {
        ...
        lock(buff[i]);
        write(dst, buff[i]);
        unlock(buff[i]);
        ...
    }

Cooperative Parallel Synchronized Threads

RPC Call
(Figure: a client issues RPC(func) over the network; the server executes func() { /* body */ } and returns the result.)

Multithreaded Server
(Figure: client processes send requests through a message-passing facility to a server process; server threads, running in user mode above the kernel, service the requests.)

Multithreaded Compiler
(Figure: a preprocessor thread feeding a compiler thread.)

Thread Programming Models
1. The boss/worker model
2. The peer model
3. A thread pipeline

The Boss/Worker Model
(Figure: a boss thread receives the input stream and hands taskX, taskY and taskZ to worker threads, which use the program's resources: files, databases, disks, special devices.)

Example

    main()  /* the boss */
    {
        forever {
            get a request;
            switch (request) {
            case X: pthread_create(..., taskX);
            case Y: pthread_create(..., taskY);
            ...
            }
        }
    }

    taskX() /* worker */
    {
        perform the task, synchronizing if accessing shared resources
    }

    taskY() /* worker */
    {
        perform the task, synchronizing if accessing shared resources
    }

The runtime overhead of creating a thread per request can be avoided with a thread pool: the boss thread creates all worker threads at program initialization, and each worker thread suspends itself immediately, waiting for a wake-up call from the boss. A compilable sketch of the boss/worker idea is shown below.
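This is an illustration of the boss/worker model, not the author's code: the boss takes "requests" from a fixed array instead of a real input stream, and the request kinds REQ_X/REQ_Y and handlers taskX/taskY are made-up names. A production server would combine this with the thread pool noted above rather than creating one thread per request.

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    enum { REQ_X, REQ_Y };                 /* request kinds, stand-ins for taskX/taskY */

    static void *taskX(void *arg) { printf("taskX handling request %d\n", *(int *)arg); free(arg); return NULL; }
    static void *taskY(void *arg) { printf("taskY handling request %d\n", *(int *)arg); free(arg); return NULL; }

    int main(void)                          /* the boss */
    {
        int requests[] = { REQ_X, REQ_Y, REQ_X, REQ_Y };   /* stand-in for "get a request" */
        int nreq = sizeof requests / sizeof requests[0];
        pthread_t workers[4];               /* one slot per request above */

        for (int i = 0; i < nreq; i++) {
            int *argp = malloc(sizeof *argp);   /* per-worker argument */
            *argp = i;
            switch (requests[i]) {
            case REQ_X: pthread_create(&workers[i], NULL, taskX, argp); break;
            case REQ_Y: pthread_create(&workers[i], NULL, taskY, argp); break;
            }
        }
        for (int i = 0; i < nreq; i++)          /* the boss waits for its workers */
            pthread_join(workers[i], NULL);
        return 0;
    }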
The Peer Model
(Figure: after creating the other threads, the first thread becomes a peer; workers taskX and taskY each process their own static input using the program's resources.)

Example

    main()
    {
        pthread_create(&thread1, ..., task1, ...);
        pthread_create(&thread2, ..., task2, ...);
        ...
        signal all workers to start;
        wait for all workers to finish;
        do any cleanup;
    }

    task1() /* worker */
    {
        wait for start;
        perform the task, synchronizing if accessing shared resources
    }

    task2() /* worker */
    {
        wait for start;
        perform the task, synchronizing if accessing shared resources
    }

A Thread Pipeline
(Figure: the input stream passes through filter threads in stages - stage 1, stage 2, stage 3 - each stage with its own resources.)

Example

    main()
    {
        pthread_create(..., stage1);
        pthread_create(..., stage2);
        ...
        wait for all pipeline threads to finish;
        do any cleanup;
    }

    stage1()
    {
        get next input for the program;
        do stage 1 processing of the input;
        pass the result to the next thread in the pipeline;
    }

    stage2()
    {
        get input from the previous thread in the pipeline;
        do stage 2 processing of the input;
        pass the result to the next thread in the pipeline;
    }
    ...
    stageN()
    {
        get input from the previous thread in the pipeline;
        do stage N processing of the input;
        pass the result to the program output;
    }

Multithreaded Matrix Multiply
C = A x B, where C[1][1] = A[1][1]*B[1][1] + A[1][2]*B[2][1] + ... and, in general, C[m][n] is the sum of the products of the corresponding elements in row m of A and column n of B. Each resultant element can be computed independently, so one peer thread can be created per element:

    typedef struct {
        int id;
        int size;
        int row, column;
        matrix *MA, *MB, *MC;
    } matrix_work_order_t;

    main()
    {
        int size = ARRAY_SIZE, row, column;
        matrix_t MA, MB, MC;
        matrix_work_order_t *work_orderp;
        pthread_t peer[size * size];
        ...
        /* process the matrix by row and column */
        for (row = 0; row < size; row++) {
            for (column = 0; column < size; column++) {
                id = column + row * ARRAY_SIZE;
                work_orderp = malloc(sizeof(matrix_work_order_t));
                /* initialize all members of work_orderp */
                pthread_create(&peer[id], NULL, peer_mult, work_orderp);
            }
        }
        /* wait for all peers to exit */
        for (i = 0; i < size * size; i++)
            pthread_join(peer[i], NULL);
    }

Multithreaded Server

    void main(int argc, char *argv[])
    {
        int server_socket, client_socket, clilen;
        struct sockaddr_in serv_addr, cli_addr;
        int one, port_id;
    #ifdef _POSIX_THREADS
        pthread_t service_thr;
    #endif
        port_id = 4000;   /* default port_id */
        if ((server_socket = socket(AF_INET, SOCK_STREAM, 0)) < 0)
            ...
        /* per-connection service thread:
           identify the user request,
           do the necessary processing,
           send the results back,
           close the connection and terminate the thread */
        close(client_socket);
    #ifdef _POSIX_THREADS
        pthread_exit((void *)0);
    #endif
    }
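The listing above is only a fragment, so the following is a reconstruction in the same spirit rather than the original code: one detached service thread per accepted connection. The echo behaviour is an assumption (the slide only says "do necessary processing"), and error checking is omitted to keep the sketch short.

    #include <pthread.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    static void *service(void *arg)                 /* one thread per client connection */
    {
        int client_socket = *(int *)arg;
        free(arg);
        char buf[512];
        ssize_t n;
        while ((n = read(client_socket, buf, sizeof buf)) > 0)
            write(client_socket, buf, n);           /* "do necessary processing, send results" */
        close(client_socket);                       /* close the connection and terminate the thread */
        return NULL;
    }

    int main(void)
    {
        int port_id = 4000;                         /* default port, as in the slide */
        int server_socket = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in serv_addr;
        memset(&serv_addr, 0, sizeof serv_addr);
        serv_addr.sin_family = AF_INET;
        serv_addr.sin_addr.s_addr = htonl(INADDR_ANY);
        serv_addr.sin_port = htons(port_id);

        bind(server_socket, (struct sockaddr *)&serv_addr, sizeof serv_addr);
        listen(server_socket, 5);

        for (;;) {                                  /* accept loop: the "boss" side of the server */
            int *client = malloc(sizeof *client);
            *client = accept(server_socket, NULL, NULL);
            pthread_t service_thr;
            pthread_create(&service_thr, NULL, service, client);
            pthread_detach(service_thr);            /* no join needed for finished service threads */
        }
    }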
The Value of MT
- Program structure
- Parallelism
- Throughput
- Responsiveness
- System resource usage
- Distributed objects
- Single source across platforms (POSIX)
- Single binary for any number of CPUs

To Thread or Not to Thread
Reasons to thread:
- improve efficiency on uniprocessor systems;
- use multiprocessor hardware;
- improve throughput;
- implement asynchronous I/O simply;
- leverage special features of the OS.
Reasons not to thread:
- if all operations are CPU intensive, multithreading will not get you far;
- thread creation is very cheap, but it is not free: a thread that runs only five lines of code is not worth creating.

DOS - The Minimal OS
(Figure: user space holds the user code, global data, stack and stack pointer, and program counter; kernel space holds the DOS code and DOS data, directly above the hardware.)

Multitasking OSs
(Figure: in UNIX, VMS, MVS, NT, OS/2 etc., each program runs inside a process structure, with user space separated from kernel space.)

Multitasking Systems
(Figure: the kernel schedules processes P1..P4 over the hardware; each process is completely independent.)

Multithreaded Process
(Figure: the user code and global data are shared by all threads; each thread has its own stack pointer and program counter (T1's SP and PC, T2's SP and PC, T3's PC). The kernel state and the address space are shared.)

Kernel Structures
- Traditional UNIX process structure: process ID, UID, GID, EUID, EGID, CWD, ..., signal dispatch table, memory map, file descriptors.
- Solaris 2 process structure: the same structure plus LWPs (LWP 1, LWP 2).

Scheduling Design Options
- M:1 (HP-UNIX)
- 1:1 (DEC, NT, OS/2, AIX, IRIX)
- M:M
- 2-level

SunOS Two-Level Thread Model
(Figure: traditional processes Proc 1..Proc 5; user threads are multiplexed onto LWPs, LWPs onto kernel threads, and the kernel runs on the hardware processors.)

Thread Life Cycle
Thread T1 creates T2; T2 runs func() and terminates itself.

    /* POSIX */
    main()
    {
        ...
        pthread_create(..., func, arg, ...);
        ...
    }
    void *func()
    {
        ...
        pthread_exit(...);
    }

    /* Solaris */
    main()
    {
        ...
        thr_create(..., func, arg, ...);
        ...
    }

Waiting for a Thread to Exit
T1 blocks in the join call until T2 calls the thread exit function.

    /* POSIX */   pthread_join(T2, ...);
    /* Solaris */ thr_join(T2, ...);

Scheduling States: Simplified View of Thread State Transitions
(Figure: a thread moves between RUNNABLE, ACTIVE, SLEEPING and STOPPED; wakeup makes a sleeping thread runnable, sleep and preemption take an active thread off the processor, and stop/continue move threads to and from the stopped state.)

Preemption
- Preemption is the process of rudely interrupting a thread and forcing it to relinquish its LWP (or CPU) to another.
- CPU2 cannot change CPU3's registers directly. It can only issue a hardware interrupt to CPU3; it is up to CPU3's interrupt handler to look at CPU2's request and decide what to do.
- Higher-priority threads always preempt lower-priority threads.
- Preemption != time slicing.
- All of the thread libraries are preemptive.

EXIT vs. THREAD_EXIT
- The normal C function exit() always causes the process to exit - the whole process, meaning all of its threads.
- The thread exit functions - UI: thr_exit(); POSIX: pthread_exit(); OS/2: DosExitThread() and _endthread(); NT: ExitThread() and endthread() - cause only the calling thread to exit, leaving the process intact and all of the other threads running. (If no other threads are running, exit() will be called.)

Cancellation
- Cancellation is the means by which a thread can tell another thread that it should exit: POSIX: pthread_cancel(T1); OS/2: DosKillThread(T1); Windows NT: TerminateThread(T1).
- There is no special relation between the killer of a thread and the victim. (UI threads must "roll their own" using signals.)

Cancellation State and Type
State:
- PTHREAD_CANCEL_DISABLE (cannot be cancelled)
- PTHREAD_CANCEL_ENABLE
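To connect these constants to actual calls, here is a small illustrative POSIX sketch (mine, not from the slides): the worker disables cancellation around a pretend critical section, re-enables it, and then spins on a deferred cancellation point until main() cancels and joins it.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static void *worker(void *arg)
    {
        int old;
        /* protect a critical region: cancel requests are held pending while disabled */
        pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &old);
        sleep(1);                                   /* pretend to update shared state */
        pthread_setcancelstate(PTHREAD_CANCEL_ENABLE, &old);

        for (;;)
            pthread_testcancel();                   /* deferred cancellation point */
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, worker, NULL);
        sleep(2);
        pthread_cancel(t);                          /* ask the victim thread to exit */

        void *status;
        pthread_join(t, &status);                   /* a cancelled thread reports PTHREAD_CANCELED */
        printf("worker cancelled: %s\n",
               status == PTHREAD_CANCELED ? "yes" : "no");
        return 0;
    }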
