Events Timing and Profiling Perhaad Mistry & Dana Schaa, Northeastern University Computer Architecture Research Lab, with Benedict R. Gaster, AMD © 2011
Instructor Notes Discusses synchronization, timing and profiling in OpenCL. Coarse-grained synchronization, which operates at command-queue granularity, is covered first, along with the different types of command queues (in-order and out-of-order). Fine-grained synchronization, at the granularity of individual function calls, is then covered using OpenCL events. Improved event-handling capabilities such as user-defined events were added in the OpenCL 1.1 specification. Synchronization across multiple function calls and scheduling of actions on the command queue are discussed using wait lists. The main applications of OpenCL events are covered here, including timing, profiling and using events to manage asynchronous IO with a device. The potential performance benefit of asynchronous IO is explained using an asymptotic calculation and a real-world medical imaging application. 2 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
Topics OpenCL command queues Events and synchronization OpenCL 1.1 and event callbacks Using OpenCL events for timing and profiling Using OpenCL events for asynchronous host-device communication with image reconstruction example 3 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
Command Queues We need to measure the performance of an application as a whole and not just our optimized kernels to understand bottlenecks This necessitates understanding of OpenCL synchronization techniques and events  Command queues are used to submit work to a device Two main types of command queues In Order Queue Out of Order Queue 4 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
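As a point of reference for the two queue types above, here is a minimal sketch (not from the original deck) of creating command queues with the OpenCL 1.x host API. The ctx and dev handles are assumed to come from earlier clCreateContext / clGetDeviceIDs calls; passing 0 for the properties bitfield gives the default in-order queue, while CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE requests out-of-order execution (a device may ignore the request).

#include <CL/cl.h>

/* Sketch: create one in-order and one out-of-order queue on the same
   device. Error handling is abbreviated. */
void create_queues(cl_context ctx, cl_device_id dev)
{
    cl_int err;
    cl_command_queue in_order = clCreateCommandQueue(ctx, dev, 0, &err);
    cl_command_queue out_of_order = clCreateCommandQueue(
        ctx, dev, CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE, &err);
    /* ... enqueue work, then clReleaseCommandQueue on both ... */
    (void)in_order; (void)out_of_order;
}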
In-Order Execution In an in-order command queue, each command executes after the previous one has finished. For the set of commands shown, the read from the device would start after the kernel call has finished, so memory transactions have a consistent view. Commands to submit: clEnqueueWriteBuffer(queue, d_ip, CL_TRUE, 0, mem_size, (void *)ip, 0, NULL, NULL); clEnqueueNDRangeKernel(queue, kernel0, 2, 0, global_work_size, local_work_size, 0, NULL, NULL); clEnqueueReadBuffer(queue, d_op, CL_TRUE, 0, mem_size, (void *)op, 0, NULL, NULL); [Figure: timeline of the Device 0 command queue showing the write, NDRange and read executing one after the other] 5 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
Out-of-Order Execution In an out-of-order command queue, commands are executed as soon as possible, without any ordering guarantees. All memory operations occur in a single memory pool, so out-of-order queues can result in memory transactions that overlap and clobber data without some form of synchronization. The commands discussed in the previous slide could execute in any order on the device. [Figure: timeline of the Device 0 command queue showing the write, NDRange and read overlapping] 6 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
Synchronization in OpenCL Synchronization is required if we use an out-of-order command queue or multiple command queues. Coarse synchronization granularity: per command queue. Finer synchronization granularity: per OpenCL operation, using events. Synchronization in OpenCL is restricted to within a context; this is similar to the fact that it is not possible to share data between multiple contexts without explicit copying. The following discussion of synchronization is applicable to any OpenCL device (CPU or GPU). 7 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
OpenCL Command Queue Control Command queue synchronization methods work on a per-queue basis. Flush: clFlush(cl_command_queue) sends all commands in the queue to the compute device; there is no guarantee that they will be complete when clFlush returns. Finish: clFinish(cl_command_queue) waits for all commands in the command queue to complete before proceeding (the host blocks on this call). Barrier: clEnqueueBarrier(cl_command_queue) enqueues a synchronization point that ensures all prior commands in the queue have completed before any further commands execute. 8 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
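A minimal sketch of the three per-queue controls, assuming some work has already been enqueued on queue (the function name is illustrative, not from the slides):

#include <CL/cl.h>

/* Sketch: the three per-queue synchronization calls. */
void queue_control(cl_command_queue queue)
{
    clFlush(queue);          /* submit everything; returns immediately   */
    clEnqueueBarrier(queue); /* later commands wait for all earlier ones */
    clFinish(queue);         /* host blocks until the queue has drained  */
}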
Synchronization for clEnqueue Functions Functions like clEnqueueReadBuffer and clEnqueueWriteBuffer have a boolean parameter to determine if the function is blocking This provides a blocking construct that can be invoked to block the host If blocking is TRUE, OpenCL enqueues the operation using the host pointer in the command-queue Host pointer can be reused by the application after the enqueue call returns If blocking is FALSE, OpenCL will use the host pointer parameter to perform a non-blocking read/write and returns immediately Host pointer cannot be reused safely by the application after the call returns Event handle returned by clEnqueue* operations can be used to check if the non-blocking operation has completed 9 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
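To make the difference concrete, here is a hedged sketch of a blocking and a non-blocking read; queue, d_op, mem_size and the host pointer are assumed to come from earlier setup, and the non-blocking case waits on the returned event before the host pointer is reused.

#include <CL/cl.h>

/* Sketch: the blocking flag decides when the host pointer may be reused. */
void read_examples(cl_command_queue queue, cl_mem d_op,
                   size_t mem_size, void *host_ptr)
{
    cl_event read_done;

    /* Blocking read: returns only after host_ptr holds the data. */
    clEnqueueReadBuffer(queue, d_op, CL_TRUE, 0, mem_size,
                        host_ptr, 0, NULL, NULL);

    /* Non-blocking read: returns immediately; host_ptr must not be
       touched until the returned event reaches CL_COMPLETE. */
    clEnqueueReadBuffer(queue, d_op, CL_FALSE, 0, mem_size,
                        host_ptr, 0, NULL, &read_done);
    clWaitForEvents(1, &read_done);
}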
OpenCL Events Previous OpenCL synchronization functions only operated at a per-command-queue granularity; OpenCL events are needed to synchronize at a per-function granularity. Explicit synchronization is required for out-of-order command queues and multiple command queues. OpenCL events are data types defined by the specification for storing status and timing information returned by the device. 10 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
OpenCL Events Profiling of OpenCL programs using events has to be enabled explicitly when creating a command queue: the CL_QUEUE_PROFILING_ENABLE flag must be set. Keeping track of events may slow down execution. A handle to store event information can be passed to all clEnqueue* commands; when commands such as clEnqueueNDRangeKernel and clEnqueueReadBuffer are invoked, an event object is returned at the passed address and timing information is recorded in it. 11 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
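A small sketch, assuming ctx and dev from earlier setup, of creating a queue with profiling enabled so that the timestamps discussed in the following slides are actually recorded:

#include <CL/cl.h>

/* Sketch: profiling timestamps are recorded only if the queue was
   created with CL_QUEUE_PROFILING_ENABLE. */
cl_command_queue create_profiling_queue(cl_context ctx, cl_device_id dev)
{
    cl_int err;
    return clCreateCommandQueue(ctx, dev, CL_QUEUE_PROFILING_ENABLE, &err);
}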
Uses of OpenCL Events Using OpenCL Events we can: time execution of clEnqueue* calls like kernel execution or explicit data transfers use the events from OpenCL to schedule asynchronous data transfers between host and device profile an application to understand an execution flow  observe overhead and time consumed by a kernel in the command queue versus actually executing Note: OpenCL event handling can be done in a consistent manner on both CPU and GPU for AMD and NVIDIA’s implementations 12 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
Capturing Event Information cl_int clGetEventProfilingInfo( cl_event event, //event object cl_profiling_info param_name, //Type of data of event size_t param_value_size, //size of memory pointed to by param_value void *param_value, //Pointer to returned timestamp size_t *param_value_size_ret) //size of data copied to param_value clGetEventProfilingInfo allows us to query a cl_event to get the required counter values. Timing information is returned as cl_ulong data types. Returns the device time counter in nanoseconds. 13 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
Event Profiling Information cl_int clGetEventProfilingInfo( cl_event event, //event object cl_profiling_info param_name, //Type of data of event size_t param_value_size, //size of memory pointed to by param_value void *param_value, //Pointer to returned timestamp size_t *param_value_size_ret) //size of data copied to param_value The cl_profiling_info enumerated type names the timestamps that can be queried: CL_PROFILING_COMMAND_QUEUED (command enqueued in the command queue by the host), CL_PROFILING_COMMAND_SUBMIT (command submitted by the host to the device), CL_PROFILING_COMMAND_START (command starts execution on the device), and CL_PROFILING_COMMAND_END (command finishes execution on the device). 14 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
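The following sketch (not from the deck) queries all four timestamps of a completed event to separate queueing and submission overhead from actual execution time; it assumes the event came from a profiling-enabled queue.

#include <CL/cl.h>
#include <stdio.h>

/* Sketch: break a completed event's lifetime into queue, submit and
   execution phases. */
void print_event_breakdown(cl_event ev)
{
    cl_ulong queued, submit, start, end;
    clGetEventProfilingInfo(ev, CL_PROFILING_COMMAND_QUEUED,
                            sizeof(cl_ulong), &queued, NULL);
    clGetEventProfilingInfo(ev, CL_PROFILING_COMMAND_SUBMIT,
                            sizeof(cl_ulong), &submit, NULL);
    clGetEventProfilingInfo(ev, CL_PROFILING_COMMAND_START,
                            sizeof(cl_ulong), &start, NULL);
    clGetEventProfilingInfo(ev, CL_PROFILING_COMMAND_END,
                            sizeof(cl_ulong), &end, NULL);
    printf("in queue: %lu ns, waiting on device: %lu ns, executing: %lu ns\n",
           (unsigned long)(submit - queued),
           (unsigned long)(start - submit),
           (unsigned long)(end - start));
}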
Capturing Event Information cl_int clGetEventInfo( cl_event event, //event object cl_event_info param_name, //Specifies the information to query size_t param_value_size, //size of memory pointed to by param_value void *param_value, //Pointer to memory where the queried result is returned size_t *param_value_size_ret) //size in bytes of data copied to param_value clGetEventInfo can be used to return information about the event object: it can return details about the command queue, context, type of command associated with the event, and its execution status. This command can be used along with the timing provided by clGetEventProfilingInfo as part of a high-level profiling framework to keep track of commands. 15 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
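A brief sketch of querying an event's execution status and command type with clGetEventInfo; the function name is illustrative, and the results would typically be fed into such a logging framework.

#include <CL/cl.h>

/* Sketch: status is CL_QUEUED, CL_SUBMITTED, CL_RUNNING or CL_COMPLETE
   (or a negative error code). */
void query_event(cl_event ev)
{
    cl_int status;
    cl_command_type type;
    clGetEventInfo(ev, CL_EVENT_COMMAND_EXECUTION_STATUS,
                   sizeof(cl_int), &status, NULL);
    clGetEventInfo(ev, CL_EVENT_COMMAND_TYPE,
                   sizeof(cl_command_type), &type, NULL);
    (void)status; (void)type; /* hand off to a logging framework */
}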
User Events in OpenCL 1.1 OpenCL 1.1 defines a user event object. Unlike events from clEnqueue* commands, user events can be set by the user. When we create a user event, its status is set to CL_SUBMITTED. clSetUserEventStatus is used to set the execution status of a user event object; the status needs to be set to CL_COMPLETE, and a user event can only be set to CL_COMPLETE once. cl_event clCreateUserEvent( cl_context context, //OpenCL Context cl_int *errcode_ret) //Returned Error Code cl_int clSetUserEventStatus( cl_event event, //User event cl_int execution_status) //Execution Status 16 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
Using User Events A simple example of a user event being triggered and used in a command queue: //Create a user event which will gate the write of buf1 user_event = clCreateUserEvent(ctx, NULL); clEnqueueWriteBuffer( cq, buf1, CL_FALSE, ..., 1, &user_event, NULL); //The write of buf1 is now enqueued and waiting on user_event X = foo(); //Lots of complicated host processing code clSetUserEventStatus(user_event, CL_COMPLETE); //The clEnqueueWriteBuffer to buf1 can now proceed using the output of foo() 17 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
Wait Lists All clEnqueue* methods also accept event wait lists. Wait lists are arrays of cl_event type. OpenCL defines wait lists to provide precedence rules: enqueue a list of events to wait for, such that all of them must complete before this particular command can execute, or enqueue a command to mark this location in the queue with a unique event object that can be used for synchronization. err = clWaitForEvents(1, &read_event); clEnqueueWaitForEvents( cl_command_queue, cl_uint, cl_event *) clEnqueueMarker( cl_command_queue, cl_event *) 18 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
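The sketch below ties the wait list, marker and host-side wait together; queue, kernel, the global work size and the write_done event are assumed to exist from earlier setup, and the names are illustrative.

#include <CL/cl.h>

/* Sketch: the kernel waits on a previous write, and a marker gives
   later code a single event to wait on. */
void waitlist_example(cl_command_queue queue, cl_kernel kernel,
                      const size_t *global, cl_event write_done)
{
    cl_event kernel_done, marker;

    /* Kernel may not start before write_done reaches CL_COMPLETE. */
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, global, NULL,
                           1, &write_done, &kernel_done);

    /* Mark this point in the queue with its own event. */
    clEnqueueMarker(queue, &marker);

    /* Host blocks until everything up to the marker has completed. */
    clWaitForEvents(1, &marker);
    clReleaseEvent(kernel_done);
}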
Example of Event Callbacks cl_int clSetEventCallback( cl_event event, //Event object cl_int command_exec_callback_type, //Status on which callback is invoked void (CL_CALLBACK *pfn_event_notify)(cl_event event, cl_int event_command_exec_status, void *user_data), //Callback function void *user_data) //User data passed to callback OpenCL 1.1 allows registration of a user callback function for a specific command execution status. Event callbacks can be used to enqueue new commands based on event state changes in a non-blocking manner. Using blocking versions of clEnqueue* OpenCL functions in a callback leads to undefined behavior. The callback takes a cl_event, a status and a pointer to user data as its parameters. 19 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
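A minimal sketch of registering a CL_COMPLETE callback on an event returned by a clEnqueue* call; the tag string and function names are hypothetical, and the callback only logs, since blocking OpenCL calls inside a callback are undefined.

#include <CL/cl.h>
#include <stdio.h>

/* Sketch: callback invoked when the associated command completes. */
static void CL_CALLBACK on_complete(cl_event ev, cl_int status, void *user_data)
{
    (void)ev;
    printf("command '%s' finished with status %d\n",
           (const char *)user_data, status);
}

void register_callback(cl_event ev)
{
    static const char tag[] = "kernel0"; /* hypothetical label */
    clSetEventCallback(ev, CL_COMPLETE, on_complete, (void *)tag);
}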
AMD Events Extension OpenCL event callbacks are valid only for the CL_COMPLETE state. The cl_amd_event_callback extension provides the ability to register event callbacks for states other than CL_COMPLETE. Lecture 10 discusses how to use vendor-specific extensions in OpenCL. The additional event states allowed are CL_QUEUED, CL_SUBMITTED, and CL_RUNNING. 20 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
Using Events for Timing OpenCL events can easily be used for timing durations of kernels.  This method is reliable for performance optimizations since it uses counters from the device  By taking differences of the start and end timestamps we are discounting overheads like time spent in the command queue clGetEventProfilingInfo( event_time,  CL_PROFILING_COMMAND_START,  sizeof(cl_ulong), &starttime, NULL); clGetEventProfilingInfo(event_time, CL_PROFILING_COMMAND_END,  sizeof(cl_ulong), &endtime, NULL); unsigned long elapsed =   (unsigned long)(endtime - starttime); 21 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
Profiling Using Events OpenCL calls occur asynchronously within a heterogeneous application, and a clFinish to capture events after each function introduces interference. To obtain a pipeline view of commands in an OpenCL context: declare a large array of events at the beginning of the application, assign an event from this array to each clEnqueue* call, and query all events at one time after the critical path of the application. For example, Application Function 1 issues clEnqueueNDRangeKernel(kernel1, ..., &event_list[i]) and Application Function 2 issues clEnqueueNDRangeKernel(kernel2, ..., &event_list[i+1]); with N = estimated number of events in the application and cl_event event_list[N], an event logging framework can query and format the data stored in event_list, as sketched below. 22 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
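A hedged sketch of this logging pattern, with hypothetical names: one event per clEnqueue* call, a single clFinish after the critical path, and the timestamps queried afterwards.

#include <CL/cl.h>

#define MAX_EVENTS 1024
static cl_event event_list[MAX_EVENTS];
static int      num_events = 0;

/* Sketch: record one event per enqueue, synchronize once, then query. */
void run_pipeline(cl_command_queue q, cl_kernel k1, cl_kernel k2,
                  const size_t *global)
{
    clEnqueueNDRangeKernel(q, k1, 1, NULL, global, NULL,
                           0, NULL, &event_list[num_events++]);
    clEnqueueNDRangeKernel(q, k2, 1, NULL, global, NULL,
                           0, NULL, &event_list[num_events++]);

    clFinish(q); /* one synchronization point, after the critical path */

    for (int i = 0; i < num_events; i++) {
        cl_ulong start, end;
        clGetEventProfilingInfo(event_list[i], CL_PROFILING_COMMAND_START,
                                sizeof(cl_ulong), &start, NULL);
        clGetEventProfilingInfo(event_list[i], CL_PROFILING_COMMAND_END,
                                sizeof(cl_ulong), &end, NULL);
        /* format/store (end - start) in the logging framework */
    }
}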
Profiling with Event Information Before getting timing information, we must make sure that the events we are interested in have completed. There are different ways of waiting for events: clWaitForEvents(numEvents, eventlist) or clFinish(commandQueue). The timer resolution can be obtained by passing CL_DEVICE_PROFILING_TIMER_RESOLUTION to clGetDeviceInfo(). 23 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
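A short sketch of querying the timer resolution; clGetDeviceInfo reports CL_DEVICE_PROFILING_TIMER_RESOLUTION as a size_t value in nanoseconds.

#include <CL/cl.h>
#include <stdio.h>

/* Sketch: query the device's profiling timer resolution. */
void print_timer_resolution(cl_device_id dev)
{
    size_t resolution_ns;
    clGetDeviceInfo(dev, CL_DEVICE_PROFILING_TIMER_RESOLUTION,
                    sizeof(size_t), &resolution_ns, NULL);
    printf("profiling timer resolution: %zu ns\n", resolution_ns);
}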
Example of Profiling A heterogeneous application can have multiple kernels and a large amount of host-device IO. Questions that can be answered by profiling with OpenCL events: Which kernel should be optimized when multiple kernels take similar time? Are many small kernels called repeatedly better or worse than one large, slow, complicated kernel? Are the kernels spending too much time in queues? What is the proportion between execution time and setup time for the application? How much does host-device IO matter? By profiling an application with minimum overhead and no extra synchronization, most of the above questions can be answered. 24 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
Asynchronous IO Overlapping host-device I/O with computation can lead to substantial application-level performance improvements. How much can we benefit from asynchronous IO? It can prove to be a non-trivial coding effort, so is it worth it? It is useful for streaming workloads that can stall the GPU, such as medical imaging, where new data is generated and processed in a continuous loop. Other uses include workloads like linear algebra, where the results of previous time steps can be transferred asynchronously to the host. We need two command queues with a balanced amount of work on each queue. 25 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
Asynchronous I/O Asymptotic approximation of the benefit of asynchronous IO in OpenCL. Tc = time for computation, Ti = time to write input to the device, To = time to read output from the device (assume host-to-device and device-to-host I/O take the same time). Case 1: one queue. Two kernels and their I/O run back to back (Ti, Tc, To, Ti, Tc, To), so Total Time = 2Tc + 2Ti + 2To. Case 2: two queues, one device for compute. The second kernel's I/O overlaps the first kernel's computation, so Total Time for two kernels and their I/O = 2Tc + Ti + To. Note: idle time denotes time when no IO occurs while the compute units of the GPU are busy. 26 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
Asynchronous I/O Time with 1 queue = 2Tc + 2Ti + 2To (no asynchronous behavior). Time with 2 queues = 2Tc + Ti + To (overlap computation and communication). The maximum benefit achievable with similar input and output data sizes is roughly 30%: when Tc = Ti = To = T the total drops from 6T to 4T, a 33% reduction, since the overlap removes the idle time shown in the previous diagram. Host-device I/O is limited by PCI bandwidth, so in practice it is often not quite as big a win. 27 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
Dual DMA Engine Case Nvidia Fermi GPUs have dual DMA engines that allow simultaneous bidirectional IO. Case 3: one device for compute but dual DMA engines, with Tc ~ Ti ~ To = T. The baseline with one queue for 5 iterations is 3T * 5 = 15T. In the overlapped case, input transfer, computation and output transfer are pipelined across the dual DMA engines, so Total Time = 7T (the 5 pipelined iterations plus 2T of pipeline fill and drain). Potential performance benefit ~ 50%. 28 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
Example: Image Reconstruction Filtered back-projection application. Multiple sinogram images are processed to build a reconstructed image. Images are continuously fed in from the scanner to iteratively improve the resultant output: a streaming style of data flow from an input image stack to a reconstructed output image. Image source: hem.bredband.net/luciadbb/Reconstruction_presentation.pdf 29 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
Without Asynchronous I/O Single command queue: N images take N(Tcompute + Ttransfer). This is inefficient for medical imaging applications like reconstruction, where large numbers of input images are used in a streaming fashion to incrementally reconstruct an image. The performance improvement from asynchronous IO is better than in the previously discussed case, since there is no IO from device to host after each kernel call; this reduces the total IO time per kernel by half. Total time per image = Ti + Tc = 2T; overlapped time = T (if Ti = Tc), showing ~50% improvement scope. [Timeline: Copy(ImageN) followed by ComputeKernel(ImageN) for each image, with no overlap] 30 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
Events for Asynchronous I/O Two command queues are created on the same device. This is different from the asymptotic analysis case of dividing computation between queues: here we use different queues for IO and compute. We have no output data moving from host to device for each image, so using separate command queues also allows for latency hiding. Compute queue: ComputeKernel(Image0), ComputeKernel(Image1), ComputeKernel(Image2). I/O queue: Copy(Image0) first, then Copy(Image1) and Copy(Image2) overlapping the compute of the previous image. 31 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
Backprojection The backprojection kernel isn't relevant to this discussion; we simply wish to show how kernels and buffer writes can be enqueued to different queues to overlap IO and computation. More optimizations are possible, like buffering strategies and memory management to avoid declaring N buffers on the device. //N is the number of images we wish to use cl_event event_list[N]; //Start first copy to device clEnqueueWriteBuffer(queue1, d_ip[0], CL_FALSE, 0, mem_size, (void *)ip, 0, NULL, &event_list[0]); for (int i = 1; i < N; i++) { //Wait till IO is finished by specifying a wait list of one element which denotes the previous I/O clEnqueueNDRangeKernel(queue0, kernel, ..., 1, &event_list[i-1], NULL); //clEnqueueNDRangeKernel returns asynchronously, so the next IO can begin //Enqueue a non-blocking write of the next image using the other command queue clEnqueueWriteBuffer(queue1, d_ip[i], CL_FALSE, 0, mem_size, (void *)ip, 0, NULL, &event_list[i]); } 32 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
Summary OpenCL events allow us to use the execution model and synchronization to benefit application performance. Use command queue synchronization constructs for coarse-grained control, and use events for fine-grained control over an application. OpenCL 1.1 allows more complicated event handling, adds callbacks, and provides events that can be triggered by the user. The specification provides events to understand application performance; per-kernel analysis requires vendor-specific tools such as the Nvidia OpenCL Profiler and the AMD Accelerated Parallel Processing (APP) Profiler and Kernel Analyzer. 33 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011

More Related Content

What's hot

protothread and its usage in contiki OS
protothread and its usage in contiki OSprotothread and its usage in contiki OS
protothread and its usage in contiki OSSalah Amean
 
Efficient Dynamic Scheduling Algorithm for Real-Time MultiCore Systems
Efficient Dynamic Scheduling Algorithm for Real-Time MultiCore Systems Efficient Dynamic Scheduling Algorithm for Real-Time MultiCore Systems
Efficient Dynamic Scheduling Algorithm for Real-Time MultiCore Systems iosrjce
 
Putting a Fork in Fork (Linux Process and Memory Management)
Putting a Fork in Fork (Linux Process and Memory Management)Putting a Fork in Fork (Linux Process and Memory Management)
Putting a Fork in Fork (Linux Process and Memory Management)David Evans
 
Direct Code Execution @ CoNEXT 2013
Direct Code Execution @ CoNEXT 2013Direct Code Execution @ CoNEXT 2013
Direct Code Execution @ CoNEXT 2013Hajime Tazaki
 
Making a Process
Making a ProcessMaking a Process
Making a ProcessDavid Evans
 
Performance analysis of sobel edge filter on heterogeneous system using opencl
Performance analysis of sobel edge filter on heterogeneous system using openclPerformance analysis of sobel edge filter on heterogeneous system using opencl
Performance analysis of sobel edge filter on heterogeneous system using opencleSAT Publishing House
 
Virtual Memory (Making a Process)
Virtual Memory (Making a Process)Virtual Memory (Making a Process)
Virtual Memory (Making a Process)David Evans
 
Crossing into Kernel Space
Crossing into Kernel SpaceCrossing into Kernel Space
Crossing into Kernel SpaceDavid Evans
 
Kernelvm 201312-dlmopen
Kernelvm 201312-dlmopenKernelvm 201312-dlmopen
Kernelvm 201312-dlmopenHajime Tazaki
 
eBPF Perf Tools 2019
eBPF Perf Tools 2019eBPF Perf Tools 2019
eBPF Perf Tools 2019Brendan Gregg
 
Performance and predictability
Performance and predictabilityPerformance and predictability
Performance and predictabilityRichardWarburton
 
Paper id 71201933
Paper id 71201933Paper id 71201933
Paper id 71201933IJRAT
 
Scheduling in Linux and Web Servers
Scheduling in Linux and Web ServersScheduling in Linux and Web Servers
Scheduling in Linux and Web ServersDavid Evans
 
Inference accelerators
Inference acceleratorsInference accelerators
Inference acceleratorsDarshanG13
 
Multicore programmingandtpl(.net day)
Multicore programmingandtpl(.net day)Multicore programmingandtpl(.net day)
Multicore programmingandtpl(.net day)Yan Drugalya
 

What's hot (20)

protothread and its usage in contiki OS
protothread and its usage in contiki OSprotothread and its usage in contiki OS
protothread and its usage in contiki OS
 
Efficient Dynamic Scheduling Algorithm for Real-Time MultiCore Systems
Efficient Dynamic Scheduling Algorithm for Real-Time MultiCore Systems Efficient Dynamic Scheduling Algorithm for Real-Time MultiCore Systems
Efficient Dynamic Scheduling Algorithm for Real-Time MultiCore Systems
 
Putting a Fork in Fork (Linux Process and Memory Management)
Putting a Fork in Fork (Linux Process and Memory Management)Putting a Fork in Fork (Linux Process and Memory Management)
Putting a Fork in Fork (Linux Process and Memory Management)
 
Direct Code Execution @ CoNEXT 2013
Direct Code Execution @ CoNEXT 2013Direct Code Execution @ CoNEXT 2013
Direct Code Execution @ CoNEXT 2013
 
Making a Process
Making a ProcessMaking a Process
Making a Process
 
50120140506014
5012014050601450120140506014
50120140506014
 
Performance analysis of sobel edge filter on heterogeneous system using opencl
Performance analysis of sobel edge filter on heterogeneous system using openclPerformance analysis of sobel edge filter on heterogeneous system using opencl
Performance analysis of sobel edge filter on heterogeneous system using opencl
 
Virtual Memory (Making a Process)
Virtual Memory (Making a Process)Virtual Memory (Making a Process)
Virtual Memory (Making a Process)
 
Crossing into Kernel Space
Crossing into Kernel SpaceCrossing into Kernel Space
Crossing into Kernel Space
 
Kernelvm 201312-dlmopen
Kernelvm 201312-dlmopenKernelvm 201312-dlmopen
Kernelvm 201312-dlmopen
 
eBPF Perf Tools 2019
eBPF Perf Tools 2019eBPF Perf Tools 2019
eBPF Perf Tools 2019
 
Performance and predictability
Performance and predictabilityPerformance and predictability
Performance and predictability
 
Paper id 71201933
Paper id 71201933Paper id 71201933
Paper id 71201933
 
Task and Data Parallelism
Task and Data ParallelismTask and Data Parallelism
Task and Data Parallelism
 
Scheduling
SchedulingScheduling
Scheduling
 
Scheduling in Linux and Web Servers
Scheduling in Linux and Web ServersScheduling in Linux and Web Servers
Scheduling in Linux and Web Servers
 
Parallel computation
Parallel computationParallel computation
Parallel computation
 
Inference accelerators
Inference acceleratorsInference accelerators
Inference accelerators
 
Multicore programmingandtpl(.net day)
Multicore programmingandtpl(.net day)Multicore programmingandtpl(.net day)
Multicore programmingandtpl(.net day)
 
Multicore Intel Processors Performance Evaluation
Multicore Intel Processors Performance EvaluationMulticore Intel Processors Performance Evaluation
Multicore Intel Processors Performance Evaluation
 

Similar to Lec11 timing

Efficient Resource Allocation to Virtual Machine in Cloud Computing Using an ...
Efficient Resource Allocation to Virtual Machine in Cloud Computing Using an ...Efficient Resource Allocation to Virtual Machine in Cloud Computing Using an ...
Efficient Resource Allocation to Virtual Machine in Cloud Computing Using an ...ijceronline
 
A Practical Event Driven Model
A Practical Event Driven ModelA Practical Event Driven Model
A Practical Event Driven ModelXi Wu
 
Contiki introduction II-from what to how
Contiki introduction II-from what to howContiki introduction II-from what to how
Contiki introduction II-from what to howDingxin Xu
 
Manage all the things, small and big, with open source LwM2M implementations ...
Manage all the things, small and big, with open source LwM2M implementations ...Manage all the things, small and big, with open source LwM2M implementations ...
Manage all the things, small and big, with open source LwM2M implementations ...Benjamin Cabé
 
Behind modern concurrency primitives
Behind modern concurrency primitivesBehind modern concurrency primitives
Behind modern concurrency primitivesBartosz Sypytkowski
 
Kapacitor - Real Time Data Processing Engine
Kapacitor - Real Time Data Processing EngineKapacitor - Real Time Data Processing Engine
Kapacitor - Real Time Data Processing EnginePrashant Vats
 
An introduction to_rac_system_test_planning_methods
An introduction to_rac_system_test_planning_methodsAn introduction to_rac_system_test_planning_methods
An introduction to_rac_system_test_planning_methodsAjith Narayanan
 
Behind modern concurrency primitives
Behind modern concurrency primitivesBehind modern concurrency primitives
Behind modern concurrency primitivesBartosz Sypytkowski
 
LCA13: Common Clk Framework DVFS Roadmap
LCA13: Common Clk Framework DVFS RoadmapLCA13: Common Clk Framework DVFS Roadmap
LCA13: Common Clk Framework DVFS RoadmapLinaro
 
Anatomy of an action
Anatomy of an actionAnatomy of an action
Anatomy of an actionGordon Chung
 
Parallel and Distributed Computing Chapter 8
Parallel and Distributed Computing Chapter 8Parallel and Distributed Computing Chapter 8
Parallel and Distributed Computing Chapter 8AbdullahMunir32
 
Kubernetes @ Squarespace (SRE Portland Meetup October 2017)
Kubernetes @ Squarespace (SRE Portland Meetup October 2017)Kubernetes @ Squarespace (SRE Portland Meetup October 2017)
Kubernetes @ Squarespace (SRE Portland Meetup October 2017)Kevin Lynch
 
Phil Basford - machine learning at scale with aws sage maker
Phil Basford - machine learning at scale with aws sage makerPhil Basford - machine learning at scale with aws sage maker
Phil Basford - machine learning at scale with aws sage makerAWSCOMSUM
 

Similar to Lec11 timing (20)

OpenCL 2.2 Reference Guide
OpenCL 2.2 Reference GuideOpenCL 2.2 Reference Guide
OpenCL 2.2 Reference Guide
 
Efficient Resource Allocation to Virtual Machine in Cloud Computing Using an ...
Efficient Resource Allocation to Virtual Machine in Cloud Computing Using an ...Efficient Resource Allocation to Virtual Machine in Cloud Computing Using an ...
Efficient Resource Allocation to Virtual Machine in Cloud Computing Using an ...
 
A Practical Event Driven Model
A Practical Event Driven ModelA Practical Event Driven Model
A Practical Event Driven Model
 
Clone cloud
Clone cloudClone cloud
Clone cloud
 
Contiki introduction II-from what to how
Contiki introduction II-from what to howContiki introduction II-from what to how
Contiki introduction II-from what to how
 
Manage all the things, small and big, with open source LwM2M implementations ...
Manage all the things, small and big, with open source LwM2M implementations ...Manage all the things, small and big, with open source LwM2M implementations ...
Manage all the things, small and big, with open source LwM2M implementations ...
 
Behind modern concurrency primitives
Behind modern concurrency primitivesBehind modern concurrency primitives
Behind modern concurrency primitives
 
Linux Internals - Part II
Linux Internals - Part IILinux Internals - Part II
Linux Internals - Part II
 
Kapacitor - Real Time Data Processing Engine
Kapacitor - Real Time Data Processing EngineKapacitor - Real Time Data Processing Engine
Kapacitor - Real Time Data Processing Engine
 
An introduction to_rac_system_test_planning_methods
An introduction to_rac_system_test_planning_methodsAn introduction to_rac_system_test_planning_methods
An introduction to_rac_system_test_planning_methods
 
Sycl 1.2 Reference Card
Sycl 1.2 Reference CardSycl 1.2 Reference Card
Sycl 1.2 Reference Card
 
Behind modern concurrency primitives
Behind modern concurrency primitivesBehind modern concurrency primitives
Behind modern concurrency primitives
 
LCA13: Common Clk Framework DVFS Roadmap
LCA13: Common Clk Framework DVFS RoadmapLCA13: Common Clk Framework DVFS Roadmap
LCA13: Common Clk Framework DVFS Roadmap
 
OpenCL 2.1 Reference Guide
OpenCL 2.1 Reference GuideOpenCL 2.1 Reference Guide
OpenCL 2.1 Reference Guide
 
Microkernel Development
Microkernel DevelopmentMicrokernel Development
Microkernel Development
 
Anatomy of an action
Anatomy of an actionAnatomy of an action
Anatomy of an action
 
Parallel and Distributed Computing Chapter 8
Parallel and Distributed Computing Chapter 8Parallel and Distributed Computing Chapter 8
Parallel and Distributed Computing Chapter 8
 
Kubernetes @ Squarespace (SRE Portland Meetup October 2017)
Kubernetes @ Squarespace (SRE Portland Meetup October 2017)Kubernetes @ Squarespace (SRE Portland Meetup October 2017)
Kubernetes @ Squarespace (SRE Portland Meetup October 2017)
 
fall2013
fall2013fall2013
fall2013
 
Phil Basford - machine learning at scale with aws sage maker
Phil Basford - machine learning at scale with aws sage makerPhil Basford - machine learning at scale with aws sage maker
Phil Basford - machine learning at scale with aws sage maker
 

Recently uploaded

TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc
 
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024BookNet Canada
 
Digital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptxDigital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptxLoriGlavin3
 
A Journey Into the Emotions of Software Developers
A Journey Into the Emotions of Software DevelopersA Journey Into the Emotions of Software Developers
A Journey Into the Emotions of Software DevelopersNicole Novielli
 
WordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your BrandWordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your Brandgvaughan
 
"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii SoldatenkoFwdays
 
The Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsThe Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsPixlogix Infotech
 
unit 4 immunoblotting technique complete.pptx
unit 4 immunoblotting technique complete.pptxunit 4 immunoblotting technique complete.pptx
unit 4 immunoblotting technique complete.pptxBkGupta21
 
What is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdfWhat is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdfMounikaPolabathina
 
Time Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsTime Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsNathaniel Shimoni
 
Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 3652toLead Limited
 
From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .Alan Dix
 
The State of Passkeys with FIDO Alliance.pptx
The State of Passkeys with FIDO Alliance.pptxThe State of Passkeys with FIDO Alliance.pptx
The State of Passkeys with FIDO Alliance.pptxLoriGlavin3
 
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptxUse of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptxLoriGlavin3
 
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024BookNet Canada
 
Rise of the Machines: Known As Drones...
Rise of the Machines: Known As Drones...Rise of the Machines: Known As Drones...
Rise of the Machines: Known As Drones...Rick Flair
 
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Mark Simos
 
TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024Lonnie McRorey
 
DevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenDevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenHervé Boutemy
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfAddepto
 

Recently uploaded (20)

TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
 
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
 
Digital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptxDigital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptx
 
A Journey Into the Emotions of Software Developers
A Journey Into the Emotions of Software DevelopersA Journey Into the Emotions of Software Developers
A Journey Into the Emotions of Software Developers
 
WordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your BrandWordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your Brand
 
"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko
 
The Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsThe Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and Cons
 
unit 4 immunoblotting technique complete.pptx
unit 4 immunoblotting technique complete.pptxunit 4 immunoblotting technique complete.pptx
unit 4 immunoblotting technique complete.pptx
 
What is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdfWhat is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdf
 
Time Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsTime Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directions
 
Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365
 
From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .
 
The State of Passkeys with FIDO Alliance.pptx
The State of Passkeys with FIDO Alliance.pptxThe State of Passkeys with FIDO Alliance.pptx
The State of Passkeys with FIDO Alliance.pptx
 
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptxUse of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
 
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
 
Rise of the Machines: Known As Drones...
Rise of the Machines: Known As Drones...Rise of the Machines: Known As Drones...
Rise of the Machines: Known As Drones...
 
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
 
TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024
 
DevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenDevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache Maven
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdf
 

Lec11 timing

  • 1. Events Timing and Profiling Perhaad Mistry & Dana Schaa, Northeastern University Computer Architecture Research Lab, with Benedict R. Gaster, AMD © 2011
  • 2. Instructor Notes Discusses synchronization, timing and profiling in OpenCL Coarse grain synchronization covered which discusses synchronizing on a command queue granularity Discuss different types of command queues (in order and out of order) Fine grained synchronization which covers synchronization at a per function call granularity using OpenCL events Improved event handling capabilities like user defined events have been added in the OpenCL 1.1 specification Synchronization across multiple function calls and scheduling actions on the command queue discussed using wait lists The main applications of OpenCL events have been discussed here including timing, profiling and using events for managing asynchronous IO with a device Potential performance benefits with asynchronous IO explained using a asymptotic calculation and a real world medical imaging application 2 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
  • 3. Topics OpenCL command queues Events and synchronization OpenCL 1.1 and event callbacks Using OpenCL events for timing and profiling Using OpenCL events for asynchronous host-device communication with image reconstruction example 3 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
  • 4. Command Queues We need to measure the performance of an application as a whole and not just our optimized kernels to understand bottlenecks This necessitates understanding of OpenCL synchronization techniques and events Command queues are used to submit work to a device Two main types of command queues In Order Queue Out of Order Queue 4 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
  • 5. In-Order Execution In an in-order command queue, each command executes after the previous one has finished For the set of commands shown, the read from the device would start after the kernel call has finished Memory transactions have consistent view Commands To Submit clEnqueueWriteBuffer (queue , d_ip, CL_TRUE, 0, mem_size, (void *)ip, 0, NULL, NULL) clEnqueueNDRangeKernel (queue, kernel0, 2,0, global_work_size, local_work_size, 0,NULL,NULL) clEnqueueReadBuffer( context, d_op, CL_TRUE, 0, mem_size, (void *) op, NULL, NULL,NULL); Time Mem clEnqWrite EnqNDRange clEnqRead Device 0 Command Queue 5 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
  • 6. Out-of-Order Execution In an out-of-order command queue, commands are executed as soon as possible, without any ordering guarantees All memory operations occur in single memory pool Out-of-order queues result in memory transactions that will overlap and clobber data without some form of synchronization The commands discussed in the previous slide could execute in any order on device EnqWrite Time EnqRead EnqNDRange Mem Device 0 Command Queue 6 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
  • 7. Synchronization in OpenCL Synchronization is required if we use an out-of-order command queue or multiple command queues Coarse synchronization granularity Per command queue basis Finer synchronization granularity Per OpenCL operation basis using events Synchronization in OpenCL is restricted to within a context This is similar to the fact that it is not possible to share data between multiple contexts without explicit copying The proceeding discussion of synchronization is applicable to any OpenCL device (CPU or GPU) 7 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
  • 8. OpenCL Command Queue Control Command queue synchronization methods work on a per-queue basis Flush:clFlush(cl_commandqueue) Send all commands in the queue to the compute device No guarantee that they will be complete when clFlush returns Finish:clFinish(cl_commandqueue) Waits for all commands in the command queue to complete before proceeding (host blocks on this call) Barrier: clEnqueueBarrier(cl_commandqueue) Enqueue a synchronization point that ensures all prior commands in a queue have completed before any further commands execute 8 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
  • 9. Synchronization for clEnqueue Functions Functions like clEnqueueReadBuffer and clEnqueueWriteBuffer have a boolean parameter to determine if the function is blocking This provides a blocking construct that can be invoked to block the host If blocking is TRUE, OpenCL enqueues the operation using the host pointer in the command-queue Host pointer can be reused by the application after the enqueue call returns If blocking is FALSE, OpenCL will use the host pointer parameter to perform a non-blocking read/write and returns immediately Host pointer cannot be reused safely by the application after the call returns Event handle returned by clEnqueue* operations can be used to check if the non-blocking operation has completed 9 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
  • 10. OpenCL Events Previous OpenCL synchronization functions only operated on a per-command-queue granularity OpenCL events are needed to synchronize at a function granularity Explicit synchronization is required for Out-of-order command queues Multiple command queues OpenCL events are data-types defined by the specification for storing timing information returned by the device 10 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
  • 11. OpenCL Events Profiling of OpenCL programs using events has to be enabled explicitly when creating a command queue CL_QUEUE_PROFILING_ENABLE flag must be set Keeping track of events may slow down execution A handle to store event information can be passed for all clEnqueue* commands When commands such as clEnqueueNDRangeKernel and clEnqueueReadBuffer are invoked timing information is recorded at the passed address 11 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
  • 12. Uses of OpenCL Events Using OpenCL Events we can: time execution of clEnqueue* calls like kernel execution or explicit data transfers use the events from OpenCL to schedule asynchronous data transfers between host and device profile an application to understand an execution flow observe overhead and time consumed by a kernel in the command queue versus actually executing Note: OpenCL event handling can be done in a consistent manner on both CPU and GPU for AMD and NVIDIA’s implementations 12 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
  • 13. Capturing Event Information cl_int clGetEventProfilingInfo( cl_event event, //event object cl_profiling_infoparam_name, //Type of data of event size_t param_value_size, //size of memory pointed to by param_value void * param_value, //Pointer to returned timestamp size_t * param_value_size_ret) //size of data copied to param_value clGetEventProfilingInfoallows us to query cl_event to get required counter values Timing information returned as cl_ulongdata types Returns device time counter in nanoseconds 13 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
  • 14. Event Profiling Information cl_int clGetEventProfilingInfo( cl_event event, //event object cl_profiling_infoparam_name, //Type of data of event size_t param_value_size, //size of memory pointed to by param_value void * param_value, //Pointer to returned timestamp size_t * param_value_size_ret) //size of data copied to param_value Table shows event types described using cl_profiling_infoenumerated type 14 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
  • 15. Capturing Event Information cl_int clGetEventInfo ( cl_event event, //event object cl_event_infoparam_name, //Specifies the information to query. void * param_value, //Pointer to memory where result queried is returned size_t * param_value_size_ret) //size in bytes of memory pointed to by param_value clGetEventInfocan be used to return information about the event object It can return details about the command queue, context, type of command associated with events, execution status This command can be used by along with timing provided by clGetEventProfilingInfo as part of a high level profiling framework to keep track of commands 15 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
  • 16. User Events in OpenCL 1.1 OpenCL 1.1 defines a user event object. Unlike clEnqueue* commands, user events can be set by the user When we create a user event, status is set to CL_SUBMITTED clSetUserEventStatus is used to set the execution status of a user event object. The status needs to be set to CL_COMPLETE A user event can only be set to CL_COMPLETE once cl_event clCreateUserEvent( cl_contextcontext, //OpenCL Context cl_int *errcode_ret) //Returned Error Code cl_memclSetUserEventStatus( cl_event event, //User event cl_int execution_status) //Execution Status 16 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
  • 17. Using User Events A simple example of user events being triggered and used in a command queue //Create user event which will start the write of buf1 user_event = clCreateUserEvent(ctx, NULL); clEnqueueWriteBuffer( cq, buf1, CL_FALSE, ..., 1, &user_event , NULL); //The write of buf1 is now enqued and waiting on user_event X = foo(); //Lots of complicated host processing code clSetUserEventStatus(user_event, CL_COMPLETE); //The clEnqueueWriteBuffer to buf1 can now proceed as per OP of foo() 17 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
  • 18. Wait Lists All clEnqueue* methods also accept event wait lists Waitlists are arrays of cl_event type OpenCL defines waitlists to provide precedence rules Enqueue a list of events to wait for such that all events need to complete before this particular command can be executed Enqueue a command to mark this location in the queue with a unique event object that can be used for synchronization err = clWaitOnEvents(1, &read_event); clEnqueueWaitListForEvents( cl_command_queue , int, cl_event *) clEnqueueMarker( cl_command_queue, cl_event *) 18 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
  • 19. Example of Event CallBacks cl_int clSetEventCallback ( cl_event event, //Event Name cl_int command_exec_type , //Status on which callback is invoked void (CL_CALLBACK *pfn_event_notify) //Callback Name (cl_event event, cl_int event_command_exec_status, void *user_data), void * user_data) //User Data Passed to callback OpenCL 1.1 allows registration of a user callback function for a specific command execution status Event callbacks can be used to enqueue new commands based on event state changes in a non-blocking manner Using blocking versions of clEnqueue* OpenCL functions in callback leads to undefined behavior The callback takes an cl_event, status and a pointer to user data as its parameters 19 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
  • 20. AMD Events Extension OpenCL event callbacks are valid only for the CL_COMPLETE state The cl_amd_event_callback extension provides the ability to register event callbacks for states other than CL_COMPLETE Lecture 10 discusses how to use vendor specific extensions in OpenCL The event states allowed are cl_queued, cl_submitted, and cl_running 20 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
  • 21. Using Events for Timing OpenCL events can easily be used for timing durations of kernels. This method is reliable for performance optimizations since it uses counters from the device By taking differences of the start and end timestamps we are discounting overheads like time spent in the command queue clGetEventProfilingInfo( event_time, CL_PROFILING_COMMAND_START, sizeof(cl_ulong), &starttime, NULL); clGetEventProfilingInfo(event_time, CL_PROFILING_COMMAND_END, sizeof(cl_ulong), &endtime, NULL); unsigned long elapsed = (unsigned long)(endtime - starttime); 21 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
  • 22. Profiling Using Events OpenCL calls occur asynchronously within a heterogeneous application A clFinishto capture events after each function introduces interference Obtaining a pipeline view of commands in an OpenCL context Declare a large array of events in beginning of application Assign an event from within this array to each clEnqueue* call Query all events at one time after the critical path of the application Application Function 1 Application Function 2 clEnqueueNDRangeKernel( kernel1, &event_list[i] ); clEnqueueNDRangeKernel( kernel2, &event_list [i+1] ); &event_list [i] &event_list [i+1] Event Logging Framework N = estimated no. of events in application cl_event event_list[N] Event logging framework can query and format data stored in event_list 22 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
  • 23. Profiling with Event Information Before getting timing information, we must make sure that the events we are interested in have completed There are different ways of waiting for events: clWaitForEvents(numEvents, eventlist) clFinish(commandQueue) Timer resolution can be obtained from the flag CL_DEVICE_PROFILING_TIMER_RESOLUTION when calling clGetDeviceInfo() 23 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
  • 24. Example of Profiling A heterogeneous application can have multiple kernels and a large amount of host device IO Questions that can be answered by profiling using OpenCL events We need to know which kernel to optimize when multiple kernels take similar time ? Small kernels that may be called multiple times vs. large slow complicated kernel ? Are the kernels spending too much time in queues ? Understand proportion between execution time and setup time for an application How much does host device IO matter ? By profiling an application with minimum overhead and no extra synchronization, most of the above questions can be answered 24 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
  • 25. Asynchronous IO Overlapping host-device I/O can lead to substantial application level performance improvements How much can we benefit from asynchronous IO Can prove to be a non-trivial coding effort, Is it worth it ? Useful for streaming workloads that can stall the GPU like medical imaging where new data is generated and processed in a continuous loop Other uses include workloads like linear algebra where the results of previous time steps can be transferred asynchronously to the host We need two command queues with a balanced amount of work on each queue 25 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
  • 26. Asynchronous I/O Asymptotic approximation of the benefit of asynchronous IO in OpenCL Tc = time for computation, Ti = time to write input to the device, To = time to read output from the device (assume host-to-device and device-to-host I/O take the same time) Case 1: 1 queue: the transfers and kernels for two images run back to back (Ti, Tc, To, Ti, Tc, To), so Total Time = 2Tc + 2Ti + 2To Case 2: 2 queues, one device for compute: the second image's input transfer overlaps the first image's computation, so Total Time for 2 kernels and their I/O = 2Tc + Ti + To Note: in the timeline, idle time denotes time when no IO occurs while the compute units of the GPU are busy 26 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
  • 27. Asynchronous I/O Time with 1 queue = 2Tc + 2Ti + 2To (no asynchronous behavior) Time with 2 queues = 2Tc + Ti + To (computation and communication overlap) The maximum achievable benefit with similar input and output sizes is roughly a third of the total time when Tc = Ti = To, since the overlap removes the idle time shown in the previous diagram (see the worked numbers below) Host-device I/O is limited by PCIe bandwidth, so in practice the win is often smaller 27 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
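As a quick check of these numbers, under the assumption Tc = Ti = To = T:

    T_{\text{1 queue}}  = 2T_c + 2T_i + 2T_o = 6T
    T_{\text{2 queues}} = 2T_c + T_i + T_o = 4T
    \text{saving} = \frac{6T - 4T}{6T} \approx 33\%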
  • 28. Dual DMA Engine Case In Nvidia Fermi GPUs, dual DMA engines allow simultaneous bidirectional IO Case 3: one device for compute but dual DMA engines, with Tc ≈ Ti ≈ To = T For 5 images, the baseline with one queue = 5 * 3T = 15T With both DMA engines active, the input transfers, computation, and output transfers for consecutive images are fully pipelined, giving Total Time = 7T Potential performance benefit ~ 50% 28 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
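The dual-DMA figure can be checked the same way, assuming 5 images and Tc = Ti = To = T: the three-stage pipeline of input transfer, compute, and output transfer fills for T, streams the 5 compute steps, and drains for T.

    T_{\text{baseline}}  = 5 \times 3T = 15T
    T_{\text{pipelined}} = (5 + 3 - 1)\,T = 7T
    \text{saving} = \frac{15T - 7T}{15T} \approx 53\%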
  • 29. Example: Image Reconstruction Filtered back-projection application Multiple sinogram images are processed to build a reconstructed image Images are continuously fed in from the scanner to iteratively improve the resultant output Streaming-style data flow Input Image Stack Image Source: hem.bredband.net/luciadbb/Reconstruction_presentation.pdf Reconstructed Output Image 29 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
  • 30. Without Asynchronous I/O Single command queue: N images = N(Tcompute + Ttransfer) Inefficient for medical imaging applications like reconstruction, where large numbers of input images are used in a streaming fashion to incrementally reconstruct an image The performance improvement from asynchronous IO is better than in the previously discussed case since there is no IO from device to host after each kernel call, which cuts the total IO time per kernel in half Total time per image = Ti + Tc = 2T; overlapped time = T (if Ti = Tc), which shows scope for ~50% improvement Single-queue timeline: Copy(Image0), ComputeKernel(Image0), Copy(Image1), ComputeKernel(Image1) 30 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
  • 31. Events for Asynchronous I/O Two command queues created on the same device Different from the asymptotic analysis case of dividing computation between queues; in this case we use different queues for IO and compute We have no output data moving from device to host for each image, so using separate command queues also allows for latency hiding Compute Queue: ComputeKernel(Image0), ComputeKernel(Image1), ComputeKernel(Image2) I/O Queue: Copy(Image0), Copy(Image1), Copy(Image2) 31 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
  • 32. Backprojection The backprojection kernel itself isn't relevant to this discussion; we simply wish to show how kernels and buffer writes can be enqueued on different queues to overlap IO and computation More optimizations are possible, like buffering strategies and memory management to avoid declaring N buffers on the device
    // N is the number of images we wish to use
    cl_event event_list[N];
    // Start the first non-blocking copy to the device; it signals event_list[0] when done
    // (mem_size is the size of one image in bytes, ip points at the host image data)
    clEnqueueWriteBuffer(queue1, d_ip[0], CL_FALSE, 0, mem_size, (void *)ip, 0, NULL, &event_list[0]);
    for (int i = 1; i < N; i++) {
        // Wait until the IO is finished by specifying a wait list of one element denoting the previous write
        clEnqueueNDRangeKernel(queue0, kernel, …, 1, &event_list[i-1], NULL);
        // clEnqueueNDRangeKernel returns asynchronously, so the next transfer can start right away
        // Enqueue a non-blocking write of the next image using the other command queue
        clEnqueueWriteBuffer(queue1, d_ip[i], CL_FALSE, 0, mem_size, (void *)ip, 0, NULL, &event_list[i]);
    }
32 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011
  • 33. Summary OpenCL events allow us to use the execution model and synchronization to benefit application performance Use command-queue synchronization constructs for coarse-grained control Use events for fine-grained control over an application OpenCL 1.1 allows more sophisticated event handling, adds callbacks, and provides user-triggered events The specification provides events to understand an application's performance; per-kernel analysis requires vendor-specific tools Nvidia OpenCL Profiler AMD Accelerated Parallel Processing (APP) Profiler and Kernel Analyzer 33 Perhaad Mistry & Dana Schaa, Northeastern Univ Computer Architecture Research Lab, with Ben Gaster, AMD © 2011

Editor's Notes

  1. Motivation and recap of command queues
  2. In-Order Execution of a command queue
  3. Out-of-order execution provides no guarantee of a command completing before another starts executing
  4. Coarse-grained synchronization in OpenCL has a limited use case; e.g., it can be used to be sure a kernel is done executing before we read back data. This function call blocks the host.
  5. The blocking parameter provided by the clEnqueue* functions dictates how the host pointer associated with the clEnqueue* call may be used
  6. Fine grained synchronization which is used for Out-of-order command queues or multiple command queues
  7. Enabling recording of OpenCL events (profiling) has to be done while creating the command queue
  8. Example use cases of events
  9. Syntax for event capture
  10. Possible event states
  11. Syntax for event capture
  12. OpenCL 1.1 provides a user event which can be set by a program and not by OpenCL functions like the clEnqueue* functions
  13. Simple usage scenario for OpenCL events that can be scheduled by the user. For example, set up a write to a device on a queue after some complicated host computation on the data is finished
  14. Wait lists are simply arrays of events
  15. Callback allows launching of functions on events in OpenCL
  16. The AMD extension allows for defining multiple states even in user events; the specification only provides CL_COMPLETE.
  17. By taking the difference between start and end times, we can check the execution duration of kernels without the overhead that a host-side timer such as gettimeofday would have
  18. Simple profiling technique to understand flow of OpenCL kernels in a command queue
  19. Using waitlists while profiling
  20. Profiling use cases
  21. Asynchronous IO which can be used to overlap kernel computation with host device communication
  22. A theoretical, asymptotic-style calculation can be used to estimate the potential benefit of using asynchronous IO with a device. As seen from Cases 1 and 2, the benefit of asynchronous IO can be seen by comparing kernel and IO time; we can at most save 2Ti or 2Tc, because there would be some idle time if Ti and Tc are not equal
  23. Performance benefit for one balanced command queue (no idle time)
  24. Fermi GPUs can allow more overlap because of dual DMA engines, which increases the amount of latency hiding as shown
  25. Example streaming application which can benefit from overlapped computation and communication
  26. In this application the device-to-host IO only occurs at the end of the reconstruction
  27. In this application the device-to-host IO only occurs at the end of the reconstruction
  28. Kernel call and event list which can allow moving data asynchronously