
Thread: Nvidia purposefully hobbles PhysX on the CPU ................Real World Tech proves Nvidia's de-optimizations

  1. #1
    Member
    • avas911's Gadgets
      • Motherboard:
      • Gigabyte GA-EG41MF-US2H
      • CPU:
      • Intel Pentium Dual Core E6500 2.9 GHz 2MB L2 1066MHz FSB
      • RAM:
      • 2x2GB 800 MHz Apacer at 5-5-5-15
      • Hard Drive:
      • OCZ Vertex 3 120GB Sata III & Samsung 103SJ 1 TB F3
      • Graphics Card:
      • Sapphire ATI RADEON HD6850 1GB GDDR5
      • Display:
      • Philips 107S7 17" at [email protected]
      • Sound Card:
      • Built In Realtek ALC883
      • Speakers/HPs:
      • Creative SBS A200 / Cosonic Generic / Logitech Ultimate Ears 200vi/SoundMAGIC E10M IEM
      • Keyboard:
      • A4Tech
      • Mouse:
      • A4Tech X7 XL-747H 3600 DPI
      • Controller:
      • None
      • Power Supply:
      • Delta GPS-500AB A 500W
      • Optical Drive:
      • Asus 16x IDE DVD R
      • USB Devices:
      • Transcend 500 8GB & Corsair Survivor USB 3.0 16GB & Samsung Class 10 16GB mSDHC
      • UPS:
      • Rahimafrooz 600VA Premium
      • Operating System:
      • Win7 Ultimate 64Bit
      • Comment:
      • Slow in gaming
      • ISP:
      • Link3 512
      • Download Speed:
      • 70
      • Upload Speed:
      • 70
    avas911's Avatar
    Join Date
    Nov 2008
    Location
    Mohammadpur
    Posts
    4,251

    Default Nvidia purposefully hobbles PhysX on the CPU ................Real World Tech proves Nvidia's de-optimizations

    ==================================================================
    NVIDIA JUST HAD one of its sleaziest marketing tactics exposed: the claim that PhysX is faster on a GPU than on a CPU. As David Kanter at Real World Tech proves, the only reason PhysX is faster on a GPU is that Nvidia purposely hobbles it on the CPU. If it didn't, PhysX would run faster on a modern CPU.

    The article itself can be found here, and be forewarned, it is highly technical. In it, Kanter watched the execution of two PhysX-enabled programs: a game/tech demo called Cryostasis and an Nvidia program called PhysX Soft Body Demo. Both use PhysX, and both are heavily promoted by Nvidia to 'prove' how much better its GPUs are.

    The reason for testing things this way is that Nvidia artificially blocks any other GPU from using PhysX, going so far as to disable the functionality on its own GPUs if an ATI GPU is merely present in the system, even completely unused. The only comparison left is PhysX on the CPU versus the Nvidia GPU version.

    If you can imagine the coincidence, it runs really well on Nvidia cards, but chokes if there is an ATI card in the system. Frame rates tend to go from more than 50 to the single digits even when you have an overclocked i7 and an ATI HD5970. Since this setup is vastly faster than an i7 and a GTX480 in almost every objective test, you might suspect foul play if the inclusion of PhysX drops performance by an order of magnitude. As Real World Tech proved, those suspicions would be absolutely correct.

    How do they do it? It is easy: a combination of optimization for the GPU and de-optimization for the CPU. Nvidia has long claimed a 2-4x advantage for GPU physics, using its own PhysX APIs, over anything a CPU can do, no matter what it is or how many there are. And it can back that up with hard benchmarks, but only ones where the Nvidia API is used. For the sake of argument, let's assume that the PhysX implementations are indeed 4x faster on an Nvidia GPU than on the fastest quad-core Intel iSomethingMeaningless.

    If you look at Page 3 of the article, you will see the code traces of two PhysX-using programs. There is one thing you should pay attention to: PhysX uses x87 for FP math almost exclusively, not SSE. For those not versed in processor arcana, Intel introduced SSE with the Pentium 3, a 450MHz CPU that debuted in February of 1999. Every Intel processor since has had SSE. The Pentium 4, which debuted in November of 2000, had SSE2, and the later variants had SSE3. How many gamers use a CPU slower than 450MHz?

    Of the SSE variants, the one that matters here is SSE itself, though SSE2 could also be quite relevant. In any case, Intel hasn't introduced a CPU without SSE or SSE2 in almost a decade; 9 years and a few days short of 8 months, to be precise. For the sake of brevity, we will lump SSE, SSE2, and the later revisions into one basket called SSE.

    AMD had a similar instruction set called 3DNow!, but the mainstream K8/Athlon64/Opteron lines have had full SSE and SSE2 support since May of 2004. Some variants of the K7 had SSE under a different name, 3DNow! Professional, for years prior to that.

    Basically, anything that runs at 1GHz or faster has SSE; even the Atom variants aimed at phones and widgets support full SSE/SSE2. Nothing on the market, and nothing that was on the market for years prior to the founding of Ageia, the originator of PhysX that Nvidia later bought, lacked SSE.

    To make matters worse, x87, the 'old way' of doing FP math, has been deprecated by Intel, AMD, and Microsoft. The x64 extensions write it out of the spec; you can still make it work if you are determined to, but it won't necessarily be there in the future. If you don't have a damn good reason to use it, you really should avoid it.

    What's more, x87 is vastly slower than SSE. x87 is stack based, meaning that to do an operation with x87, you need to push things onto the register stack, manipulate it with instructions like FXCH, and spill a lot to memory needlessly. Simply using the equivalent scalar SSE instruction instead of x87 will net you about 20% more speed. You can design a pathological case where SSE is slower than x87, but you would have to go out of your way to make it happen. I am pretty sure Nvidia will demo this kind of 'valid benchmark' in the near future, a purposely designed pathological case that proves its point. In the real world, the multiple game developers, assembly experts, and chip designers consulted for this article could not think of a situation where SSE is slower.
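
    To make the difference concrete, here is a minimal sketch (my own illustration, not PhysX code) of one scalar FP operation built both ways with GCC; the flags and the typical instruction sequences are standard x86 fare:

    Code:
    /* madd.c: one scalar multiply-add, built 32-bit both ways:
         gcc -O2 -m32 -mfpmath=387 madd.c          (x87 code)
         gcc -O2 -m32 -msse2 -mfpmath=sse madd.c   (scalar SSE code) */
    void madd(const float *a, const float *b, const float *c, float *out)
    {
        *out = *a * *b + *c;   /* one multiply, one add, one store */
    }
    /* Typical x87 output: flds/fmuls/fadds/fstps, with the operands
       shuffled through the eight-slot register stack. Typical SSE
       output: movss/mulss/addss/movss on flat xmm registers, no stack
       juggling; that is the roughly 20% scalar win described above. */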

    As Real World Tech pointed out, the Ageia PhysX chip used 32-bit math, and the now-Nvidia PhysX programs likely do as well. The code runs on G80-based GPUs, which did not have double precision (DP) FP capabilities. This means you can pack four of those data points into one 128-bit SSE register.

    Why is this important? SSE instructions come in scalar and vector variants. Scalar basically means one piece of data per instruction, and that is where you get the ~20% speedup over x87. A vector instruction operates on one 128-bit value, two 64-bit values, or four 32-bit values simultaneously. Since PhysX uses 32-bit numbers, you can do four of them in one SSE instruction, so four per clock, plus the 20%. Let's be nice to Nvidia and assume only a 4x speedup from the use of vector SSE.
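
    As an illustration, this minimal sketch (an assumed data layout, not Nvidia's actual code) shows how little it takes to process four 32-bit floats per instruction with the SSE intrinsics every compiler has shipped for a decade:

    Code:
    /* add4.c: add two float arrays four elements at a time.
       Assumes n is a multiple of 4, for brevity. */
    #include <xmmintrin.h>   /* SSE intrinsics, Pentium 3 and later */

    void add4(const float *a, const float *b, float *out, int n)
    {
        int i;
        for (i = 0; i < n; i += 4) {
            __m128 va = _mm_loadu_ps(a + i);   /* load 4 packed floats  */
            __m128 vb = _mm_loadu_ps(b + i);
            __m128 vc = _mm_add_ps(va, vb);    /* 4 adds, 1 instruction */
            _mm_storeu_ps(out + i, vc);        /* store 4 results       */
        }
    }
    /* The x87 version of the same loop issues at least one add per
       element, with the register-stack traffic on top. */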

    What does this mean? Well, to avoid using SSE on any modern compiler, you have to explicitly tell the compiler to skip it. The fact that SSE has been in every Intel chip released for a decade means it is assumed everywhere. Nvidia had to go out of its way to make PhysX x87-only, and that wasn't by accident; it could not have been.

    If Nvidia hadn't gone out of its way, the 2-4x speed increase from using a GPU for PhysX would shrink to somewhere between half as fast as and about equal to a modern CPU. Even using numbers generous to Nvidia, PhysX would be SLOWER ON THE GPU IN EVERY CASE. To top it off, there is NO technical reason for Nvidia to use x87; SSE is faster in every case we could find.

    But it gets worse. Far worse. The two programs that Real World Tech looked at, and others looked at by SemiAccurate, are single-threaded. That means the CPU can only run PhysX on one core at a time. Multi-threading an API like PhysX is hard work, but Nvidia has already done that.

    GPUs have lots of 'cores'; the GTX285, for example, has 240 of them, ATI's HD5870 has 1600, and Northern Islands has... nah, that would be telling. Without quibbling over the definition of 'core', we will just take Nvidia's statement at face value and assume 240 cores. If it takes 240 of their cores to get 4x the performance of a single Intel core, each Nvidia core is worth 1/60th of an Intel core, or less. If the PhysX code didn't thread well, and it does, GPU physics would run slower than a dishwasher controller on heavy painkillers.

    So the PhysX code is threaded when run on the GPU, but not on the CPU. On consoles, the Xbox 360 and PS3 specifically, the code is threaded just fine. (Note: the Wii only has a single core without any kind of SMT, so threading won't help that platform.) None of the consoles has a CUDA-capable GPU, something Nvidia claims is necessary for GPU physics. The PS3 uses a variant of the G70 or G71 for its GPU; the first Nvidia product that supported PhysX was the G80.

    All the consoles run PhysX just fine, and the frame rate doesn't suffer the same order-of-magnitude performance decrease a PC CPU does. Why? Because Nvidia allowed the code running on console CPUs to use multiple threads to do the work, and ported it to AltiVec, the PowerPC vector instruction set. With that, a console that barely has the power of a low-end P4 will run PhysX, the game, and everything else just fine. Gosh, what might that imply?

    Most modern games fully use fewer than two CPU cores, and most gaming PCs now have four cores; the newest have six. Nvidia will not allow PhysX to run on the other two to four cores that sit basically idle while gaming. If it allowed even a second thread to run PhysX, you would double the speed at a minimum.

    Since everything is in one thread in the programs Real World Tech looked at, PhysX isn't even fully utilizing a single core, so adding more threads would almost assuredly yield far more than a 2x speedup. On a 4-core CPU, you could easily get a 4x speedup from even basic threading, far less effort than Nvidia has already put in for the consoles.
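
    For what it's worth, the kind of threading being talked about is not exotic. Here is a minimal sketch (all names here are my own illustration, not the PhysX API) of farming a particle update out to the idle cores with plain pthreads:

    Code:
    /* step.c: split an embarrassingly parallel position update
       across worker threads, one per otherwise-idle core. */
    #include <pthread.h>

    #define NTHREADS 4   /* e.g. the 2-4 cores sitting idle in a game */

    typedef struct { float *pos, *vel; int first, last; float dt; } Slice;

    static void *integrate(void *arg)
    {
        Slice *s = (Slice *)arg;
        int i;
        for (i = s->first; i < s->last; i++)
            s->pos[i] += s->vel[i] * s->dt;   /* no cross-slice dependency */
        return NULL;
    }

    void step_world(float *pos, float *vel, int n, float dt)
    {
        pthread_t tid[NTHREADS];
        Slice slice[NTHREADS];
        int chunk = n / NTHREADS, t;
        for (t = 0; t < NTHREADS; t++) {
            Slice s = { pos, vel, t * chunk,
                        (t == NTHREADS - 1) ? n : (t + 1) * chunk, dt };
            slice[t] = s;
            pthread_create(&tid[t], NULL, integrate, &slice[t]);
        }
        for (t = 0; t < NTHREADS; t++)   /* wait for all slices */
            pthread_join(tid[t], NULL);
    }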

    The problem is that a 4x speedup from threading would once again erase the 2-4x 'advantage' of running PhysX on the GPU, relegating GPU PhysX to somewhere between half as fast as and barely equal to a modern Intel CPU. See the problem? To 'fix' it, Nvidia won't thread PhysX on the PC CPU, something it does on every other platform the API is available for.

    We are told that Nvidia claims threading PhysX on the CPU is not its problem; that is up to the game developers to implement. Only on the PC, though; everywhere else, Nvidia is happy to make the effort. Like the "no one wants SSE, game developers clamor for x87 code" line of bull it spews, this is nothing more than plausible deniability for the technically unaware. Then again, after years of hype, the number of released games that use PhysX on the GPU for anything more than trivial eye candy can be counted on one hand. Make of that what you will.

    Imagine if, instead of purposefully de-optimizing PhysX for the CPU, Nvidia just did what it does on every other platform, i.e. thread the code and not restrict the instruction set for PR purposes. On a modern 4-core CPU, you would get a 4x speed increase from SSE and a 4x increase from threading. Math says that is a 16x increase in speed, more than the decrease you see going from GPU PhysX to CPU PhysX today.

    The 2-4x advantage that Nvidia claims for the GPU is only when they hobble the CPU. If they didn't, the CPU would have a 4-8x performance advantage on Nvidia's own API. Havok and Bullet physics APIs seem to do just fine, better than PhysX actually, when running on the CPU. For some unknown reason, it is only the physics API by the GPU-only vendor that has problems on modern CPUs. Anyone have a clue why this is the case?

    To take this a step further, imagine what would happen if you de-optimized the GPU version of PhysX the same way Nvidia de-optimizes the CPU version. To start with, on a GTX285, executing one instruction per clock would mean going from a '2-4x advantage' over the CPU to a 60-120x disadvantage against de-optimized CPU code. With the simple threading and SSE optimizations above, the CPU would run it 960-1920x faster than single-threaded GPU code. Even a lowly Atom CPU would probably be 100x faster than single-threaded GPU PhysX code. Take away vectorization as well, and GPU performance drops even further.
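
    If the chain of numbers is hard to follow, this back-of-the-envelope calculation (a restatement of the article's stated assumptions, not measured data) shows where each factor comes from:

    Code:
    /* envelope.c: reproduce the article's back-of-envelope factors. */
    #include <stdio.h>

    int main(void)
    {
        double adv_lo = 2.0, adv_hi = 4.0;   /* claimed GPU advantage  */
        double cores  = 240.0;               /* GTX285 'cores'         */
        double cpu_opt = 4.0 * 4.0;          /* SSE x threading = 16x  */

        /* one GPU core versus one CPU core, per the article */
        printf("per GPU core: 1/%.0f to 1/%.0f of a CPU core\n",
               cores / adv_hi, cores / adv_lo);    /* 1/60 to 1/120 */

        /* optimized CPU versus one-instruction-per-clock GPU */
        printf("optimized CPU lead: %.0fx to %.0fx\n",
               cpu_opt * cores / adv_hi,           /* 960x  */
               cpu_opt * cores / adv_lo);          /* 1920x */
        return 0;
    }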

    In the end, one thing is unquestionably clear: if you remove the de-optimizations that Nvidia inflicts only on the PC CPU version of PhysX, the GPU version would unquestionably be slower than a modern CPU. If you de-optimize the GPU version the same way Nvidia hobbles the CPU code, it would likely be 1000x slower than the CPU. If Nvidia didn't cripple CPU PhysX, it would lose to the CPU every time.

    One thing you can be sure of: Nvidia will react to the Real World Tech article with FUD and tame attack sites. The official drumbeat about no developer wanting SSE and threading being up to game developers will gain a few more technically devoid talking points, and Nvidia's innocence will be proclaimed. It doesn't matter; the GPU is the wrong thing to run physics on, it is slower than the CPU at that task. Period. That won't stop Nvidia from saying the exact opposite, though; facts don't seem to get in the way of its PR statements.

    If Nvidia wants to prove that PhysX is actually faster on the GPU, I will offer it a fair test. Give me the code tree for PhysX and the related DLL, and I will have them recompiled for GPUs while the CPU version is optimized with some minor threading and vectorized SSE. Then I will run the released PhysX-supporting games on both DLLs as a benchmark. How about it, guys? If your PR claims are anything close to true, what do you have to lose? S|A
    ==============================================================
    Source


    P.S: I am sick of nVidia at the moment.....

    P.S.2: If you didn't understand: nVidia purposefully implemented physics on the CPU in a way that makes it perform worse (read: many times worse) than on its own GPUs...

    ---------- Post added at 21:47 ---------- Previous post was at 21:36 ----------

    To summarize:

    Nvidia, in order to keep PhysX proprietary, kept PhysX tied to code that GPUs, their GPUs, can run easily but CPUs can't... not because CPUs couldn't, but because CPUs moved past that sort of code over five years ago!

    To wit:

    Nvidia wants to sell GPUs. It helps Nvidia not one iota for game developers to be able to implement PhysX on the CPU (more efficient, faster) with SSE (Streaming SIMD Extensions), because Nvidia doesn't make CPUs! Doh!
    Last edited by avas911; July 8th, 2010 at 22:40.

  2. #2
    Member
    • Aunee's Gadgets
      • Motherboard:
      • Asus P7H555
      • CPU:
      • Intel Core i5 760, 8MB cache, 2.80GHz OCed to 4.0GHz
      • RAM:
      • 4Gb + 4Gb DDR3 1333bus
      • Hard Drive:
      • Samsung 120gb ata + Samsung 500gb sata
      • Graphics Card:
      • Sapphire AMD 6670 GDDR5 1GB
      • Display:
      • 19inch Samsung SyncMaster 943 LCD
      • Speakers/HPs:
      • Panasonic 5.1
      • Keyboard:
      • A4 TECH multimedia combo keyboard
      • Mouse:
      • A4 TECH X6-20MD
      • Controller:
      • Sonic-IT PU-303 PS3/x-box 360 combo joypad
      • Optical Drive:
      • HP DVD-RW
      • Operating System:
      • Windows 7 Ultimate 64bit
      • Comment:
      • Not so good, but I still love it ^.^
      • ISP:
      • BTCL
      • Download Speed:
      • 2mbps
      • Upload Speed:
      • 1mbps
    Aunee's Avatar
    Join Date
    May 2010
    Location
    Won't tell!
    Posts
    194

    Default Re: Nvidia purposefully hobbles PhysX on the CPU ................Real World Tech proves Nvidia's de-optimizations

    Many people will have a hard time understanding this. The real issue is that nVidia wants to grow its business, so it is using this scheme to go head to head with ATI. But I don't think nVidia will be able to keep this up for long...
    I eat, I drink, I sing, taire naire na!

  3. #3
    Member
    avas911's Avatar
    Join Date
    Nov 2008
    Location
    Mohammadpur
    Posts
    4,251

    Default Re: Nvidia purposefully hobbles PhysX on the CPU ................Real World Tech proves Nvidia's de-optimizations

    Actually, they aren't getting anywhere... they're in deep trouble... :(
    Last edited by avas911; July 9th, 2010 at 01:02.

  4. #4
    Member
    • knroeoueqisk's Gadgets
      • Motherboard:
      • Gigabyte H55M-USB3 (Socket 1156)
      • CPU:
      • Intel Core i5 750
      • RAM:
      • Twinmos 2.0GB Single-Channel DDR3 1333
      • Hard Drive:
      • SAMSUNG 1 TERABYTE F3 & 250 GB F1 Sata Hard Drives
      • Graphics Card:
      • GIGABYTE ATI Radeon HD 5770 1GB
      • Display:
      • Samsung SyncMaster LCD [email protected] 1360x768
      • Sound Card:
      • Realtek High Definition Audio
      • Speakers/HPs:
      • Cosonic head phone
      • Keyboard:
      • Perfect keyboard
      • Mouse:
      • Delux optical mouse
      • Controller:
      • Normal Non brand gamepad
      • Power Supply:
      • Thermaltake TR2 500 watt
      • Optical Drive:
      • Samsung 22X Dvd Writer
      • Operating System:
      • Windows XP Professional 32-bit SP3 and Windows 7
      • Comment:
      • i am happy with it.
      • ISP:
      • Qubee
      • Download Speed:
      • 15 Kilobytes - 64 Kilobytes
      • Upload Speed:
      • 5-15 kilobytes

    Join Date
    Dec 2009
    Location
    Dhaka
    Posts
    2,085

    Default Re: Nvidia purposefully hobbles PhysX on the CPU ................Real World Tech proves Nvidia's de-optimizations

    Nvidia is such a rascal. :mad:

  5. #5
    Member
    Aunee's Avatar
    Join Date
    May 2010
    Location
    Won't tell!
    Posts
    194

    Default Re: Nvidia purposefully hobbles PhysX on the CPU ................Real World Tech proves Nvidia's de-optimizations

    Quote Originally Posted by avas911 View Post
    Actually, they aren't getting anywhere... they're in deep trouble... :(
    That is exactly why I bought an ATI gfx card...
    I eat, I drink, I sing, taire naire na!

  6. #6
    Member
    • faisal.amir.bd's Gadgets
      • Motherboard:
      • ABIT IP35 Dark Raider
      • CPU:
      • Intel Core 2 Duo [email protected]
      • RAM:
      • 2GB Transcend AxeRam DDR-2 800+ @816MHz
      • Hard Drive:
      • 820GB (320GB Hitachi, 500GB Samsung)
      • Graphics Card:
      • Gigabyte Radeon HD5770 1GB DDR5 (Batmobile Cooler)
      • Display:
      • ASUS MS228H 1080p LED backlit
      • Sound Card:
      • Creative Sound Blaster Live! 5.1
      • Speakers/HPs:
      • Creative Inspire 5200 (5.1)
      • Controller:
      • XBOX 360 wired gamepad
      • Power Supply:
      • Thermaltake TR2 600W
      • UPS:
      • Rahim Afrooz 1000VA
      • Operating System:
      • Windows 7x86
      • ISP:
      • Link 3
      • Download Speed:
      • 180kBps [1.5Mbps]
      • Upload Speed:
      • 180kBps [1.5Mbps]
      • Console:
      • 2
    faisal.amir.bd's Avatar
    Join Date
    Mar 2008
    Location
    Atlanta
    Posts
    1,590

    Default Re: Nvidia purposefully hobbles PhysX on the CPU ................Real World Tech proves Nvidia's de-optimizations

    While I am currently sick of nVidia for their strong-arm marketing tactics, I think everyone should take this article with a little bit of suspicion. Semiaccurate.com has always been a well-known nVidia hater, and their anti-nVidia information has been semi-accurate, at best. Charlie Demerjian has a personal beef with nVidia, and he has a habit of using too much techno-babble to confuse everyone while inserting his own opinions. Having said that, though, there might be some truth in this article.

    Ageia, the father of PhysX, was started in 2002... it's a safe bet that the founding partners were working on PhysX quite some time before filing their articles of organization for the corporation. That would put the origin closer to the 1999 x87-to-SSE changeover, a possible explanation for the use of the older floating-point instructions. (Especially since, as you may remember, SSE wasn't exactly welcomed with open arms upon introduction. It took a couple of years for enough SSE-supporting chips to flood the market before it was worth it for developers to adopt the new instruction set without fear of alienating a large portion of the non-SSE user base.)

    I find the use of x87 completely plausible from Ageia's standpoint... back then it was much more familiar and less time-consuming to code for than SSE. (And time was something Ageia was racing against to get its first add-on cards out the door... I seriously doubt they had the financial or development resources to retool their design for SSE even if they had wanted to.)

    Now, since nVidia purchased them 2.5 years ago and allegedly ported the technology over to CUDA, I would go as far as to call them lazy for not updating the instructions used for the task.

    I guess you could say nVidia didn't want to spend time and money for the switch to SSE, since that would hurt their profitability.... "intelligent laziness"...
    Last edited by faisal.amir.bd; July 9th, 2010 at 02:58.

  7. #7
    Forum Staff
    • dipanzan's Gadgets
      • Motherboard:
      • Gigabyte Z87 HD3
      • CPU:
      • Intel Core i5 4670k
      • RAM:
      • Corsair Dominator 16GB 1600
      • Hard Drive:
      • Crucial M4 128GB, Western Digital 1TB Blue, My Passport 2TB
      • Graphics Card:
      • HIS HD5850
      • Display:
      • Dell P2212H
      • Sound Card:
      • Asus Xonar DGx
      • Speakers/HPs:
      • Sennheiser HD598
      • Keyboard:
      • Filco Majestouch 2 TKL Ninja Reds
      • Mouse:
      • Mionix Avior 7000, Steelseries XAI
      • Power Supply:
      • Corsair HX650 v2
      • Operating System:
      • Windows 8.1 Pro x64
      • ISP:
      • Link3 :: Linksys WRT54GL w/ DD-WRT
      • Download Speed:
      • 64-128KB/s
      • Upload Speed:
      • 64-128KB/s
    dipanzan's Avatar
    Join Date
    Mar 2009
    Location
    Kalabagan, Dhaka
    Posts
    7,026

    Default Re: Nvidia purposefully hobbles PhysX on the CPU ................Real World Tech proves Nvidia's de-optimizations

    Yup, Avas bhaiya, if the news is from SemiAccurate, then you can safely bet that the given info is also semi-accurate, like the owner!
    This guy Charlie Demerjian is anti-nVidia! He's fairly well known for this type of news against nV.

  8. #8
    Moderator
    • minitt's Gadgets
      • Motherboard:
      • Asus Sabertooth Z170 S
      • CPU:
      • Intel Core i7 6700K
      • RAM:
      • Corsair Vengeance 3200MHz White LED
      • Hard Drive:
      • Samsung 830 (128GB) +1TB HDD
      • Graphics Card:
      • GTX 780 With Titan HS
      • Display:
      • Samsung KS8000 55"
      • Sound Card:
      • On board
      • Speakers/HPs:
      • Yamaha Reference Monitor
      • Keyboard:
      • Logitech G710+
      • Mouse:
      • Logitech G900 Chaos Spectrum
      • Power Supply:
      • Antec HCG-620M Modular
      • Optical Drive:
      • None
      • UPS:
      • Don't need one
      • Operating System:
      • Genuine Windows 10
      • ISP:
      • Shaw Cable
      • Download Speed:
      • 94Mbps
      • Upload Speed:
      • 20mbps
    minitt's Avatar
    Join Date
    Feb 2008
    Location
    dhaka
    Posts
    4,055

    Default Re: Nvidia purposefully hobbles PhysX on the CPU ................Real World Tech proves Nvidia's de-optimizations

    It's not just the news itself that is of interest; it's that the underlying research was conducted by another party. As for NV PhysX, it's about time they let go of it; just wait and see.

  9. #9
    Member
    • Dissent's Gadgets
      • Motherboard:
      • Asus P6T
      • CPU:
      • Intel i7-920
      • RAM:
      • A-Data DDR3 6GB
      • Hard Drive:
      • 3TB Samsung (3 HDDs)
      • Graphics Card:
      • XFX HD 4890
      • Display:
      • Asus VH222D
      • Sound Card:
      • Asus Xonar DX
      • Speakers/HPs:
      • Generic Headset | Microlab FC550
      • Keyboard:
      • A4Tech Generic
      • Mouse:
      • Logitech MX 518
      • Controller:
      • Genius Speed Wheel 3 Vibration & MaxFighter F-23U
      • Power Supply:
      • Thermaltake Toughpower 650W
      • Optical Drive:
      • Samsung
      • USB Devices:
      • Transcend 16GiB USB3
      • UPS:
      • None atm
      • Operating System:
      • Windows 7 | Arch Linux
      • ISP:
      • Qubee
      • Download Speed:
      • 62 KiB
      • Upload Speed:
      • 16 KiB
    Dissent's Avatar
    Join Date
    Aug 2009
    Location
    Dhaka
    Posts
    1,215

    Default Re: Nvidia purposefully hobbles PhysX on the CPU ................Real World Tech proves Nvidia's de-optimizations

    Quote Originally Posted by dipanzan View Post
    Yup, Avas bhaiya, if the news is from SemiAccurate, then you can safely bet that the given info is also semi-accurate, like the owner!
    This guy Charlie Demerjian is anti-nVidia! He's fairly well known for this type of news against nV.
    'Fairly' is an understatement. He has made himself and his site infamous through his biased articles.

    On-topic: I remember coming across something similar to this a few weeks ago. I can't remember the URL, but I'm guessing that's where our good friend Charlie got the initial info before obviously adding his own biased crap.
    Last edited by Dissent; July 10th, 2010 at 17:17.
    Quote Originally Posted by CvP View Post
    unless they introduce PoP/Mirror Edge styles with lara croft in bikini (gfx on par or better than AniMatrix), this game is hopeless =,='
