Abstract of Compute Unified Device Architecture CUDA

CUDA (an acronym for Compute Unified Device Architecture) is a parallel computing architecture developed by NVIDIA. CUDA is the computing engine in NVIDIA graphics processing units (GPUs) that is accessible to software developers through variants of industry-standard programming languages. Programmers use 'C for CUDA' (C with NVIDIA extensions and certain restrictions), compiled through a PathScale Open64 C compiler, to code algorithms for execution on the GPU. The CUDA architecture shares a range of computational interfaces with two competitors: the Khronos Group's Open Computing Language (OpenCL) and Microsoft's DirectCompute. Third-party wrappers are also available for Python, Perl, Fortran, Java, Ruby, Lua, MATLAB and IDL.
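
As an illustration of what 'C for CUDA' looks like, the minimal kernel below adds two arrays element by element; the kernel name vecAdd and its parameters are invented for this sketch. The __global__ qualifier and the built-in blockIdx, blockDim and threadIdx variables are among NVIDIA's extensions to standard C.

    // Illustrative 'C for CUDA' kernel: each GPU thread adds one pair of elements.
    __global__ void vecAdd(const float *a, const float *b, float *c, int n)
    {
        // The built-in variables identify which thread this is within the
        // grid of blocks launched by the host.
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)                 // guard threads that fall past the end of the array
            c[i] = a[i] + b[i];
    }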

CUDA gives developers access to the virtual instruction set and memory of the parallel computational elements in CUDA GPUs. Using CUDA, the latest NVIDIA GPUs become accessible for general computation, much like CPUs. Unlike CPUs, however, GPUs have a parallel throughput architecture that emphasizes executing many concurrent threads slowly, rather than executing a single thread very fast. This approach of solving general-purpose problems on GPUs is known as GPGPU.
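
To make the throughput model concrete, the host program sketched below launches roughly one million lightweight threads with a single kernel call, one thread per array element. It repeats the vecAdd kernel from the sketch above so the file compiles on its own; the array length, the block size of 256 and the variable names are illustrative choices, not part of the CUDA specification.

    #include <stdio.h>
    #include <stdlib.h>
    #include <cuda_runtime.h>

    // The vecAdd kernel from the sketch above, repeated so this file stands alone.
    __global__ void vecAdd(const float *a, const float *b, float *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            c[i] = a[i] + b[i];
    }

    int main(void)
    {
        const int n = 1 << 20;                 // about one million elements, one thread each
        const size_t bytes = n * sizeof(float);

        // Prepare input data on the host (CPU).
        float *h_a = (float *)malloc(bytes);
        float *h_b = (float *)malloc(bytes);
        float *h_c = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

        // Allocate device (GPU) memory and copy the inputs over.
        float *d_a, *d_b, *d_c;
        cudaMalloc((void **)&d_a, bytes);
        cudaMalloc((void **)&d_b, bytes);
        cudaMalloc((void **)&d_c, bytes);
        cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

        // One thread per element: 256 threads per block, enough blocks to cover n,
        // so this single launch creates roughly a million concurrent threads.
        const int threadsPerBlock = 256;
        const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
        vecAdd<<<blocks, threadsPerBlock>>>(d_a, d_b, d_c, n);

        // Copy the result back to the host and spot-check it.
        cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
        printf("c[0] = %f (expected 3.0)\n", h_c[0]);

        cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
        free(h_a); free(h_b); free(h_c);
        return 0;
    }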

In the computer game industry, GPUs are used not only for graphics rendering but also for game physics calculations (physical effects such as debris, smoke, fire and fluids); examples include PhysX and Bullet. CUDA has also been used to accelerate non-graphical applications in computational biology, cryptography and other fields by an order of magnitude or more. An example of this is the BOINC distributed computing client.

CUDA provides both a low-level API (the driver API) and a higher-level API (the runtime API). The initial CUDA SDK was made public on 15 February 2007, for Microsoft Windows and Linux. Mac OS X support was added later in version 2.0, which superseded the beta released on 14 February 2008. CUDA works with all NVIDIA GPUs from the G8X series onwards, including the GeForce, Quadro and Tesla lines. NVIDIA states that programs developed for the GeForce 8 series will also work without modification on all future NVIDIA video cards, due to binary compatibility.
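
The difference between the two levels can be seen in how the same task, allocating a device buffer and copying data into it, is expressed. The sketch below is illustrative only: the function names runtime_api_copy and driver_api_copy are invented and error checking is omitted, but the cuda* calls (runtime API) and cu* calls (driver API) are the real entry points of the two layers.

    #include <stdio.h>
    #include <cuda.h>            // low-level driver API (cu* functions); link with -lcuda
    #include <cuda_runtime.h>    // higher-level runtime API (cuda* functions)

    // Runtime API: device selection and context management happen implicitly.
    static void runtime_api_copy(const float *host, size_t bytes)
    {
        float *dev;
        cudaMalloc((void **)&dev, bytes);
        cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);
        cudaFree(dev);
    }

    // Driver API: the application initializes the driver and manages the
    // device and context explicitly.
    static void driver_api_copy(const float *host, size_t bytes)
    {
        CUdevice dev;
        CUcontext ctx;
        CUdeviceptr dptr;

        cuInit(0);
        cuDeviceGet(&dev, 0);
        cuCtxCreate(&ctx, 0, dev);

        cuMemAlloc(&dptr, bytes);
        cuMemcpyHtoD(dptr, host, bytes);
        cuMemFree(dptr);

        cuCtxDestroy(ctx);
    }

    int main(void)
    {
        float data[4] = { 1.0f, 2.0f, 3.0f, 4.0f };
        runtime_api_copy(data, sizeof(data));
        driver_api_copy(data, sizeof(data));
        printf("same allocation and copy done at both API levels\n");
        return 0;
    }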

CUDA is NVIDIA's parallel computing architecture. It enables dramatic increases in computing performance by harnessing the power of the GPU.

With millions of CUDA-enabled GPUs sold to date, software developers, scientists and researchers are finding broad-ranging uses for CUDA, including image and video processing, computational biology and chemistry, fluid dynamics simulation, CT image reconstruction, seismic analysis, ray tracing, and much more.

Computing is evolving from "central processing" on the CPU to "co-processing" on the CPU and GPU. To enable this new computing paradigm, NVIDIA invented the CUDA parallel computing architecture, which now ships in GeForce, ION, Quadro and Tesla GPUs, representing a significant installed base for application developers.


