1.19.2010

Adapting an x16-lane PCIe video card into an x1-lane PhysX card.

I’ve played a few games this year (Mirror’s Edge, Batman: AA) that took advantage of Nvidia’s hardware PhysX processing on the video card via CUDA.  I started considering ways to recycle my old video card as a dedicated PhysX processor.  My Gigabyte GA-P35-DS3L doesn’t support SLI and only has x1 PCIe slots open, so I had nowhere to put my older video card.  I found various hacks and mods online where people had adapted x16 cards to x1 slots for video output, but these mods physically modify either the slot on the motherboard or the connector tab on the video card.  I found the Startech PEX1TO16 lane adapter, but almost no reviews for it.  It seemed worth a shot, and less likely to damage my motherboard, so I started another of my experiments.  Would the adapter work?  And if it did, could I get decent PhysX performance out of the reduced bandwidth of an x1 connection?
[Image: IMG_2477]
First, a word of warning: this adapter raises the height of your video card, so it won’t sit at normal height or screw into the backplane of the case.  My XFX 8600GTS has empty space at the top of the card for the S-Video output, so it still fits in my system; I just had to find a way to anchor the card, which I accomplished with zip ties.  I’m looking for a properly threaded screw to make this more permanent.
You’re also going to need a lot of cooling for two video cards, especially if they are close together.  My case has plenty of ventilation, so I’m OK in this area, and my power supply has enough capacity for both cards.
Here is the adapter:
[Image: IMG_2460]



My system, before and after install:
[Image: IMG_2462]


[Image: IMG_2463]
It took some experimenting to figure out which port was the primary display after adding the new card.  Once I had the DVI cable on the right output, I just had to update my Nvidia drivers to get both cards to show up.
[Screenshot: greenshot_2010-01-15_20-36-08]
You can choose which card acts as the PhysX processor in the Nvidia control panel.

[Screenshot: greenshot_2010-01-15_20-55-16]
Benchmarks:
I ran these tests on a Core 2 Duo at 2.4GHz, 2GB of DDR2/667 RAM, and Vista Home Premium 32-bit.  The primary graphics card is a 512MB 9800GTX+ (same core as the GTS 250, but less RAM), and the PhysX processor is an 8600GTS.  I rendered all the graphics on the 9800GTX+ and switched PhysX duties between it and the 8600 for testing.  I did most of the testing with Batman: Arkham Asylum’s built-in benchmark.
[Image: Picture 2]
The results are a bit confusing.  Using the add-on card seems to raise the maximum frames per second, but it can cause a slight dip in the average FPS.  I’m not sure whether this is a bottleneck in my PCIe 1.0 graphics system or a function of these applications’ PhysX implementations.  In general the average frame rate stayed close to the baseline score.  My personal impression from playing Batman: AA was that the game ran smoother in PhysX-heavy areas when I used the secondary card.  The graphics looked about the same, but gameplay felt more fluid in foggy or physics-object-heavy areas.  I suspect the lower average comes from a quirk in the benchmark: when a level starts to render, the frame rate drops for 1-2 seconds as the physics objects draw in, and once the level finishes “setting up,” things run fast again.
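To see why a brief stall like that drags the average down even when steady-state performance is high, here’s a quick hypothetical sketch (not code from the benchmark) of how min/avg/max FPS fall out of per-frame render times:

```python
# Hypothetical sketch: deriving min/avg/max FPS from per-frame render
# times (milliseconds). The made-up numbers model a 1-2 second "set up"
# stall at level load followed by a smooth run at ~60 FPS.
frame_times_ms = [50.0] * 30 + [16.7] * 570  # 30 slow frames, then smooth

fps_per_frame = [1000.0 / t for t in frame_times_ms]
min_fps = min(fps_per_frame)
max_fps = max(fps_per_frame)
# Average FPS over the whole run = total frames / total seconds,
# so a short stall is weighted by the time it takes, not frame count.
avg_fps = len(frame_times_ms) / (sum(frame_times_ms) / 1000.0)

print(round(min_fps), round(avg_fps), round(max_fps))  # → 20 54 60
```

Even though 95% of the frames render at ~60 FPS, the 1.5-second stall pulls the average down to the mid-50s, which matches the pattern of high max FPS but a slightly lower average.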
[Screenshot: greenshot_2010-01-10_19-43-47]
I will note that this combination of cards could not run the Scarecrow hallucination levels under High PhysX settings.  Those levels would slow to a crawl when they started up.  Everything works fine on Normal.  I suspect the High setting may require a true SLI setup.
