ATI 3450 / 8400GS?
ATI 3450
8400GS
The 3450 beats it performance- and feature-wise, but I don't know the price in BD.
Why don't the people who use ATI understand that the ATI display driver consumes more RAM than Nvidia's? Plus, ATI Tray Tools can't be used on Vista 64. Yet some people still keep shouting ATI, ATI. How ridiculous!
It consumes 10-15 MB more RAM, which is negligible out of 2 GB. That's like opening a new tab in Firefox.
And Vista x64? Everything works perfectly fine. Maybe it's a problem with your drivers; try reinstalling them.
You should buy whichever one has the higher clock speed!!
@Mazhar: Have you ever used ATI Tray Tools on Vista 64 and controlled your fan with it? If so, then you have some supernatural power!
Controlling fan speed is easy on Vista x64; you simply need some third-party software. You can also tweak many other settings in those tools (e.g. overclocking, LOD, etc.). I can name them if you want me to, and it's very simple.
And everyone is shouting ATI, ATI now simply because of their performance, price and feature set.
And if you're thinking I'm an ATI fan, I'm not. I always recommend whichever product gives the most bang for the buck, never because I support a company.
Edit: sorry for the OT, I'll stick to what I said in my first post!
ATI has no such thing as a shader clock. Everything depends on the stream processors. And those stream processors don't get fully utilised in most games (except Half-Life, Portal, etc.).
Stream processors are the same thing as shaders; they are just named differently, since one of ATI's shaders can do five operations per clock. So, for example, their 64 shaders are advertised as 320 stream processors (64 x 5) on the HD3870. For Nvidia, one shader can do one operation per clock, but the clock speed is higher, so their 64 shaders are listed as 64 shaders on the 9600GT.
ATI's shader clock also runs at the same speed as the core clock: if you increase the core clock, the shader clock is automatically increased by the same amount. For Nvidia, the shader clock is adjustable separately.
I hope I have been able to explain it.
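To put rough numbers on that counting scheme, here is a quick back-of-the-envelope sketch. The clock speeds are the commonly quoted reference specs and are assumptions here, counting one "operation" per unit per clock exactly as described above:

```cpp
#include <cstdio>

// Back-of-the-envelope shader throughput using the counting scheme above:
// units x ops-per-clock x clock speed. The reference clocks are assumptions
// for illustration, not measurements from either card.
int main() {
    // ATI HD3870: 64 shader units x 5 ops each = the advertised 320 SPs,
    // all running at the core clock (~775 MHz, no separate shader clock).
    double hd3870_ops  = 64.0 * 5.0 * 0.775;   // ~248 Gops/s

    // Nvidia 9600GT: 64 shader units x 1 op each, but on a separate,
    // much faster shader clock (~1.625 GHz).
    double g9600gt_ops = 64.0 * 1.0 * 1.625;   // ~104 Gops/s

    printf("HD3870 : %.0f Gops/s\n", hd3870_ops);
    printf("9600GT : %.0f Gops/s\n", g9600gt_ops);
    return 0;
}
```

Raw per-clock throughput is only half the story, of course; how fully games can keep those five-wide ATI units busy is exactly the point raised below.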
But you forgot: most games can't make use of all the stream processors that ATI provides.
What is the pixel shader version of the ATI 3450?
"Football For Hope"
Shader Model 4.0
Wrong! It's Shader Model 4.1, and it's DirectX 10.1.
So what?!! Do 10.1 and 4.1 give you any huge bang? (No offence.)
There is actually no difference between DX10 and DX10.1. It's just a policy to attract customers.
I don't understand what you're saying... It's better not to talk about things you don't know... There are a thousand apps like ATI Tray Tools that support Vista 64-bit. And so what if ATI's driver eats a bit more RAM? Buy an ATI card and use the money you save to buy more RAM; overall it will run better...
Hmm... let's see.
Assassin's Creed had a 20% increase in performance under DX10.1, thanks to more efficient post-process filtering in DX10.1 compared to DX10.
And Diablo 3, along with many other upcoming titles, supports DX10.1 fully. Let's see how the customers get attracted now; I'm sure the performance increases of DX10.1 will attract them!
Just think of DirectX 10 as a beta version, and 10.1 as the full and final version.
Improvements over DirectX 10
As I'm sure you've already fathomed by now, DirectX 10.1 will be a superset of DirectX 10 - That is, it will support everything that DirectX 10 does (and thus all DirectX 10 compliant parts), but then add more in the way of features and performance to that offered by the base level of DirectX 10. So, before we start looking at additions to the DirectX 10.1 feature set, let's talk about where we'll be seeing improvements in the API.
One of the main improvements touted by Microsoft in DirectX 10.1 is improved access to shader resources - In particular, this involves better control when reading back samples from multi-sample anti-aliasing. In conjunction with this, the ability to create customised downsampling filters will be available in DirectX 10.1.
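For a feel of what that sample-level access amounts to in practice, here is a minimal C++ sketch against the D3D10.1 headers. Device setup, texture creation and error handling are omitted, and the function and variable names are illustrative:

```cpp
#include <d3d10_1.h>

// Sketch: bind a multisampled colour buffer as a shader resource so a pixel
// shader can read the individual AA samples and apply its own resolve filter,
// rather than relying on the fixed-function ResolveSubresource() box filter.
void BindMsaaColourBuffer(ID3D10Device1* device, ID3D10Texture2D* msaaColour,
                          ID3D10ShaderResourceView** outSrv)
{
    D3D10_SHADER_RESOURCE_VIEW_DESC desc = {};
    desc.Format        = DXGI_FORMAT_R8G8B8A8_UNORM;      // match the target
    desc.ViewDimension = D3D10_SRV_DIMENSION_TEXTURE2DMS; // per-sample view

    device->CreateShaderResourceView(msaaColour, &desc, outSrv);
}
```

With such a view bound, a Shader Model 4.x pixel shader can declare the buffer as a Texture2DMS and Load() each sample individually, which is the building block for the customised downsampling filters mentioned above.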
Floating point blending also gets some new functionality in DirectX 10.1, more specifically when used with render targets - New formats for render targets which support blending will be available in this iteration of the API, and render targets can now be blended independently of one another.
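The per-render-target blending is directly visible in the 10.1 API: the blend description gains an independent-blend switch and a per-target settings array. A minimal sketch follows, assuming a hypothetical two-target setup with alpha blending on target 0 and additive blending on target 1:

```cpp
#include <d3d10_1.h>

// Sketch: D3D10.1's independent per-render-target blending. Target 0 gets
// ordinary alpha blending while target 1 blends additively; under plain
// D3D10 all bound targets shared a single blend configuration.
void CreateSplitBlendState(ID3D10Device1* device, ID3D10BlendState1** outState)
{
    D3D10_BLEND_DESC1 desc = {};
    desc.IndependentBlendEnable = TRUE; // new in 10.1: per-target control

    // Target 0: classic source-alpha blending.
    desc.RenderTarget[0].BlendEnable           = TRUE;
    desc.RenderTarget[0].SrcBlend              = D3D10_BLEND_SRC_ALPHA;
    desc.RenderTarget[0].DestBlend             = D3D10_BLEND_INV_SRC_ALPHA;
    desc.RenderTarget[0].BlendOp               = D3D10_BLEND_OP_ADD;
    desc.RenderTarget[0].SrcBlendAlpha         = D3D10_BLEND_ONE;
    desc.RenderTarget[0].DestBlendAlpha        = D3D10_BLEND_ZERO;
    desc.RenderTarget[0].BlendOpAlpha          = D3D10_BLEND_OP_ADD;
    desc.RenderTarget[0].RenderTargetWriteMask = D3D10_COLOR_WRITE_ENABLE_ALL;

    // Target 1: additive blending (e.g. for a glow/accumulation buffer).
    desc.RenderTarget[1] = desc.RenderTarget[0];
    desc.RenderTarget[1].SrcBlend  = D3D10_BLEND_ONE;
    desc.RenderTarget[1].DestBlend = D3D10_BLEND_ONE;

    device->CreateBlendState1(&desc, outState);
}
```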
Shadows never fail to be an important part of any game title's graphics engine, and Direct3D 10.1 will see improvements to the shadow filtering capabilities within the API, which will hopefully lead to improvements in image quality in this regard.
On the performance side of things, DirectX 10.1 will allow for higher performance in multi-core systems, which is certainly good news for the ever growing numbers of dual-core users out there. The number of calls to the API when drawing and rendering reflections and refractions (two commonly used features in modern game titles) has been reduced in Direct3D 10.1, which should also make for some rather nice performance boosts. Finally, another oft-used feature, cube mapping, gets its own changes which should help with performance, in the form of the ability to use an indexable array for handling cube maps.
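That indexable cube map array surfaces in the API as a new, 10.1-only shader resource view dimension over a texture array. A sketch, with resource creation omitted and the six-cube count and format chosen arbitrarily for illustration:

```cpp
#include <d3d10_1.h>

// Sketch: a D3D10.1 cube map array. Six cube maps live in one texture
// array (6 faces x 6 cubes = 36 slices) and the shader indexes into them,
// instead of the application rebinding a resource per cube.
void CreateCubeArrayView(ID3D10Device1* device, ID3D10Texture2D* cubeAtlas,
                         ID3D10ShaderResourceView1** outSrv)
{
    D3D10_SHADER_RESOURCE_VIEW_DESC1 desc = {};
    desc.Format        = DXGI_FORMAT_R16G16B16A16_FLOAT;
    desc.ViewDimension = D3D10_1_SRV_DIMENSION_TEXTURECUBEARRAY; // 10.1-only

    desc.TextureCubeArray.MostDetailedMip  = 0;
    desc.TextureCubeArray.MipLevels        = 1;
    desc.TextureCubeArray.First2DArrayFace = 0;
    desc.TextureCubeArray.NumCubes         = 6;  // six indexable cube maps

    device->CreateShaderResourceView1(cubeAtlas, &desc, outSrv);
}
```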
Additions over DirectX 10
One of the major additions which will impact image quality in DirectX 10.1 regards precision, in a couple of different disciplines. Firstly, this revision of the API will see the introduction of 32-bit floating-point filtering over the 16-bit filtering currently on show in DirectX 9 and 10 - This should see improvements to the quality of High Dynamic Range rendering which uses this functionality over what is currently available. On top of this, overall precision throughout the rendering pipeline will also be increased, although to what level doesn't seem to have been publicly specified at present. These increases in precision could make for an interesting challenge for the graphics IHVs, as it seems likely they'll need to spend a large number of transistors in future parts just to match these new requirements, let alone eking decent performance out of their GPUs when dealing with higher precisions than those we have seen thus far.
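The new 32-bit filtering requirement can be probed through the existing format-support query; on a genuine 10.1 part the check below should always pass. A minimal sketch, with error handling omitted:

```cpp
#include <d3d10_1.h>

// Sketch: ask the device whether it can *filter* (sample) 32-bit float
// textures - the capability DirectX 10.1 makes mandatory, where DirectX 10
// only required filtering of 16-bit float formats.
bool SupportsFp32Filtering(ID3D10Device1* device)
{
    UINT support = 0;
    device->CheckFormatSupport(DXGI_FORMAT_R32G32B32A32_FLOAT, &support);
    return (support & D3D10_FORMAT_SUPPORT_SHADER_SAMPLE) != 0;
}
```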
Again looking towards improvements on the image quality front, DirectX 10.1 will also see the introduction of full application control over anti-aliasing. This will allow applications to control the usage of both multi-sample and super-sample anti-aliasing, as well as giving them the ability to choose sample patterns to best suit the rendering scenario in a particular scene or title. Finally, these changes in DirectX 10.1 give the application control over the pixel coverage mask, a mask which is used to help to quickly approximate sampling for an area of pixels. This in particular should prove to be a boon when anti-aliasing particles, vegetation, scenes with motion blur and the like. All of this additional control handed to the application could allow for anti-aliasing to be used much more wisely and effectively, and controlled by game developers themselves, rather than the current 'all or nothing' implementation available, which basically amounts to a simple on-off switch.
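One concrete piece of that application-side control in D3D10.1 is the ability to request a defined, cross-vendor sample layout instead of whatever pattern the driver happens to pick. A sketch of just the sample description, with the swap-chain and render-target plumbing omitted:

```cpp
#include <d3d10_1.h>

// Sketch: request D3D10.1's standardised 4x MSAA sample pattern, so the
// sample positions are known to the application rather than driver-chosen.
DXGI_SAMPLE_DESC StandardPattern4xMsaa()
{
    DXGI_SAMPLE_DESC sd = {};
    sd.Count   = 4;                                  // 4x MSAA
    sd.Quality = D3D10_STANDARD_MULTISAMPLE_PATTERN; // fixed 10.1 layout
    return sd;
}
```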
To add further to the additional focus on anti-aliasing in DirectX 10.1, support for a minimum of four samples per pixel (in other words, 4x anti-aliasing) is now required (although this doesn't necessarily mean that support for 2x anti-aliasing in hardware and drivers is a thing of the past).
WDDM 2.1
Lastly, we come to one final major change which will be seen in DirectX 10.1 - Whereas DirectX 10 will see the introduction of support for WDDM (Windows Display Driver Model) 2.0, DirectX 10.1 moves this on a step as the driver model moves up to 2.1. Again, be sure to have read our look at WDDM 2.0 before proceeding, to understand what this driver model is all about. Needless to say, WDDM 2.1 does everything WDDM 2.0 does, but with a couple of significant additions, mainly aimed at improving performance on DirectX 10.1 capable GPUs further still.
First on the list for WDDM 2.1 is further improvements to context switching abilities - This was improved significantly with the introduction of WDDM 2.0, where context switching could be performed after processing a command or triangle (compared to what is required prior to WDDM 2.0, where whole buffers needed to be completely worked through before a context switch could be performed). With WDDM 2.1 however, context switching can now be performed instantly. This means that a context switch is guaranteed when requested with WDDM 2.1, which isn't necessarily the case under WDDM 2.0 when long shaders or large triangles are being processed, whilst retaining the same average switching time between 2.0 and 2.1.
Due to the number of threads in use and the amount of work being done in parallel at any one time on a GPU, efficient context switching (which basically involves switching between the various threads the hardware is working on) is a vital part of processing work on a GPU, so the removal of as much overhead as possible when context switching is most welcome. This is all the more important under Windows Vista, as the possibility of the GPU having to work on more than one application that requires graphics rendering in some shape or form becomes greater, and thus the need to shift seamlessly between different rendering workloads without one application taking up all of the GPU's rendering time increases further still.
The other major addition to WDDM 2.1 is a change to the way the GPU and its driver handle page faults - In WDDM 2.0, a page fault (i.e. a request for data that is not currently loaded into memory) triggers a rather long-winded process, which involves the GPU stalling, informing the Operating System of the missing page, and then restarting again once the correct page (piece of data) has been loaded into memory. In WDDM 2.1, things are handled far more gracefully, thanks to the GPU having additional support to handle page faulting and virtual memory allocated on a per-process basis. This means that when a page fault crops up, the GPU doesn't have to stall; instead, the improved context switching capabilities we mentioned earlier are put to good use to switch to the next item on the agenda that needs to be worked on, rather than sitting around idly waiting for the correct data to be made available.
As we mentioned earlier in this article, the implementation of WDDM is likely to be one of the main reasons why we don't see DirectX 10.1 compliant hardware springing up any time soon - Quite simply, solving these context switching and page faulting issues isn't a trivial task from either a hardware or driver point of view, and thus a massive amount of work and resources will have to go into implementing the full 2.1 Windows Display Driver Model as required to gain compliance. Add to that the necessity to make the other changes required of this point release of the API, and the constant demand of users to see increased performance across the board, in the enthusiast space in particular, and you can see that getting everything together to create a compliant part is going to be a tall order for a little while yet.