Inside DirectX 11 – Dx11
The Evolution from DirectX 9 and 10, and the Features That Could Mark Success for Windows 7 – DirectX 11
No doubt, Windows is the best platform for gaming. Other platforms (Linux, Mac OS) never lived up to the competition, due to a lack of proper driver support and optimizations. What we mean by best platform is this: if you have a good graphics GPU and a great processor, you can beat the performance of the PS3 and Xbox 360.
Note: This article is part of the Why Windows 7 series.
Nowadays, DirectX is prevalently a mark of success over the other standards. But we tend to forget that the war waged a decade ago between Microsoft and Silicon Graphics in the field of 3D APIs lived on for years. Part of it had to do with the success of the Quake engine, which paved the way for solid OpenGL support from the makers of 3D cards in their drivers.
Microsoft was starting from scratch, and the learning curve was steep. So, for several years, Direct3D’s capabilities lagged behind, with an API that many programmers found a lot more confusing than OpenGL’s. With each new version, Direct3D gradually caught up with OpenGL.
In DirectX 8, for the first time, Microsoft’s API did more than just copy from SGI: it actually introduced innovations of its own, like support for vertex and pixel shaders. With DirectX 9, Microsoft managed to strike a decisive victory, imposing its API on developers.
There’s a good chance that Direct3D 11 will prove to be a more important page in the history of the API than version 10 was. Direct3D 10 was a complete revision that brought incompatibility; Microsoft has now put enough distance between it and this new version to correct the problems raised by that first major overhaul of its API. So you could call Direct3D 11 a major update that fixes the rough edges of the features introduced in Dx10. It re-uses all the concepts that were introduced with Direct3D 10, and it is compatible with the preceding version and with the preceding generation’s hardware. It’ll be available not only on Windows 7, but also on Vista, which widens the gaming horizon.
A typical game’s development phase runs between 2 and 4 years. So by the time a game that is just now starting development is released, Direct3D 11 will already be well established on PCs, since it’ll run on all PCs shipped with Windows 7 and work on the great majority of PCs running Vista.
The Difference is here:
The best way to appraise DirectX 11 is to compare it with previous versions. Here are some screenshots that give you a slight idea:
Direct x – dx9 vs. dx11
You might have to concentrate to see what’s changed between the two versions. Hint: It’s not about better shape and natural colors.
Dx10 vs. dx11
In that first shot with the swine flu guy, the one on the right is a bit nicer looking: his head and mask don’t look as polygonal as the one on the left. The two vents in his mask are actually round instead of octagonal, and there are more details all around. The one on the right also has more natural colors than the amplified ones on the left.
The second shot has more to do with digital vibrancy and detail. The right image gives better lighting to the faraway land across the gate.
In stills the difference is minor, but it becomes much more noticeable while you are in the game.
Let’s have a brief look at the core changes in DirectX 11:
One may ask, “We’ve had multi-core CPUs for several years now, and developers have learned to use them. So multi-threading a rendering engine is nothing new with Direct3D 11.” Well, this may come as a surprise to most, but current engines still use a single thread for 3D rendering. The other threads are used for sound, decompression of resources, physics, etc. But rendering is a heavy user of CPU time, so why not thread it, too? There are several reasons, some of them related to the way GPUs operate and others to the 3D API. So Microsoft set about solving the latter and working around the former.
Multithreaded GPU – via Tom’s Hardware
After all, there’s only one GPU (even when several of them are connected via SLI or CrossFire, they present one virtual GPU) and consequently only one command buffer. When a single resource is shared by several threads, mutual exclusion (a mutex) is used to prevent several threads from writing commands simultaneously and stepping on each other’s toes. That means all the advantages of using several threads are canceled out by the critical section, which serializes the code.
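To make the problem concrete, here is a minimal, portable C++ sketch of what the paragraph above describes. The names (`CommandBuffer`, `submit`, `render_with_shared_buffer`) are illustrative, not Direct3D interfaces: several threads feed one shared command buffer, and the mutex guarding it forces them through single file.

```cpp
#include <cstddef>
#include <mutex>
#include <thread>
#include <vector>

// Illustrative stand-in for the single command buffer the GPU consumes.
struct CommandBuffer {
    std::mutex mtx;
    std::vector<int> commands;

    void submit(int cmd) {
        // Every thread must take the same lock, so command submission
        // is effectively serialized even with many worker threads.
        std::lock_guard<std::mutex> lock(mtx);
        commands.push_back(cmd);
    }
};

// Run `threads` workers, each submitting `perThread` draw commands.
// All the commands arrive, but only one thread makes progress at a time.
std::size_t render_with_shared_buffer(int threads, int perThread) {
    CommandBuffer buffer;
    std::vector<std::thread> workers;
    for (int t = 0; t < threads; ++t) {
        workers.emplace_back([&buffer, t, perThread] {
            for (int i = 0; i < perThread; ++i)
                buffer.submit(t * 1000 + i);
        });
    }
    for (auto &w : workers) w.join();
    return buffer.commands.size();
}
```

The lock keeps the buffer consistent, but it also means the four threads together are hardly faster than one, which is exactly the serialization the article is pointing at.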
No API, today, can solve this problem—it’s inherent in the way the CPU and GPU communicate.
But Microsoft is offering an API that can try to work around it, if not solve it outright. Direct3D 11 introduces secondary command buffers that can be recorded and played back later.
Direct 3D 11
So each thread has a deferred context, where the commands it writes are recorded in a display list that can then be inserted into the main processing stream. Obviously, when a display list is called by the main thread (the “Execute” in the “Multi-threaded Submission” diagram below), it has to be ascertained that its thread has finished filling it. So there’s still synchronization, but this execution model at least allows some of the rendering work to be parallelized, even if the resulting acceleration won’t be ideal.
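The deferred-context pattern can be sketched in portable C++ as well (again with illustrative names, not the actual `ID3D11DeviceContext` interfaces): each worker records commands into its own private list with no locking at all, and only after the synchronization point does the main thread replay the finished lists into the single immediate stream.

```cpp
#include <thread>
#include <vector>

// Illustrative stand-in for a recorded display list / command list.
using CommandList = std::vector<int>;

// Recording happens on a worker's own list: no shared state, no mutex.
CommandList record(int threadId, int count) {
    CommandList list;
    for (int i = 0; i < count; ++i)
        list.push_back(threadId * 1000 + i);  // record, don't execute
    return list;
}

std::vector<int> render_deferred(int threads, int perThread) {
    std::vector<CommandList> lists(threads);
    std::vector<std::thread> workers;
    for (int t = 0; t < threads; ++t)
        workers.emplace_back([&lists, t, perThread] {
            lists[t] = record(t, perThread);  // fully parallel recording
        });
    // Synchronization point: make sure every list is finished...
    for (auto &w : workers) w.join();
    // ...then the main thread "executes" each list into the one real stream.
    std::vector<int> immediate;
    for (const auto &list : lists)
        immediate.insert(immediate.end(), list.begin(), list.end());
    return immediate;
}
```

The expensive part (recording) runs in parallel; only the cheap replay stays serial on the main thread, which is why the speedup is real but, as the article says, not ideal.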
As one would anticipate, a large share of the workload still fell to the main thread, which was already overloaded. That doesn’t make for the load balancing needed for good execution times. So Microsoft has made another change with Direct3D 11: the Device’s resource-creation functions can now be called from multiple threads, which are used to load resources in parallel. Synchronization within the functions of a Device is more finely managed than in Direct3D 10 and is much more economical with CPU time.
Thanks to tessellation, the potential is enormous. It now becomes possible to do without the normal map and implement level of detail directly on the GPU, allowing the use of very detailed models (several million polygons instead of the 10,000 or so in current games), at least in theory. Can Direct3D 11 cards avoid these pitfalls in practice? It’s too early to say, but in any event not everybody is convinced, and id Software is working on solving the same geometry problem with a completely different approach based on ray casting with voxels.
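The jump from 10,000 polygons to several million is easy to check with back-of-the-envelope arithmetic. A rough model (an assumption for illustration, since real hardware tessellation uses fractional factors, not plain uniform subdivision) is that each subdivision level splits every triangle into four:

```cpp
#include <cstdint>

// Rough amplification estimate: uniform subdivision turns each triangle
// into 4, so a coarse mesh grows by a factor of 4 per level.
std::uint64_t tessellated_triangles(std::uint64_t coarse, unsigned levels) {
    std::uint64_t count = coarse;
    for (unsigned i = 0; i < levels; ++i)
        count *= 4;  // each triangle becomes four
    return count;
}
```

A 10,000-triangle base mesh at 4 levels already comes to 10,000 × 256 = 2,560,000 triangles, which is why the detail can live on the GPU instead of being faked with a normal map.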
Nvidia’s CUDA marked an evolution. Microsoft wasn’t about to let the GPGPU market get away, and it now has its own language for using the GPU. The model it chose, like OpenCL’s, appears to be quite similar to CUDA, confirming the clarity of Nvidia’s vision. The advantage over the Nvidia solution lies in portability: a Compute Shader will work on an Nvidia or ATI GPU and on the future Larrabee, plus it features better integration with Direct3D, even if CUDA does already have a certain amount of support. But we won’t spend any more time on this subject, even if it is a huge one. Instead, we’ll look at all this in more detail in a few months with a story on OpenCL and Compute Shaders.
Graphics will mark a new step with DirectX 11. Are you in?