The Mother Board

Motherboards.org forums. Free tech support, motherboard ID, and more.

All times are UTC - 8 hours




[ 5 posts ]
PostPosted: Tue Sep 09, 2008 6:44 pm 
Black Belt 1st Degree

Joined: Sat Oct 28, 2000 12:01 am
Posts: 1962
Location: Oklahoma, USA
If you are currently using the GPU2 client in conjunction with one or more Nvidia cards, chances are you recently noticed quite a big drop in your PPD. Well, you have the new batch of GPU WU's to thank for that. It seems the work servers are now distributing more complex proteins that the Nvidia cards don't like, but the ATI cards will chew right through. All this in addition to the new WU's being worth only 430 points compared to 480 for the ones we have been crunching.

Due to architecture differences, Nvidia cards will blaze right through the 576 atom WU's such as Project 5015, whereas ATI cards will chug along quite nicely with the new, larger 1254 atom WU's such as Project 5019. The reason is that while Nvidia cards feature faster, more complex (but fewer) shader processors, ATI cards use a less efficient shader design but have far more of them. The increased overhead kills Nvidia cards on the larger WU's, but they can process the smaller WU's much faster than ATI cards.

From my experience, since I have started to crunch the new WU's, my GPU temps have gone up across the board and each GPU is now getting 3375 PPD instead of the usual 5120. That's a not-so-insignificant 34% drop in PPD per GPU. Curiously, the smaller WU's have consistently utilized 25% of the CPU core I have the client tied to, but the larger WU's are only utilizing between 3% to 11% of CPU time.
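For the record, the quoted figures do work out to roughly a 34% drop; a quick sanity check (my arithmetic, not from the client):

```python
# Sanity check on the PPD figures quoted above.
old_ppd = 5120  # per GPU on the smaller 576-atom WU's
new_ppd = 3375  # per GPU on the larger 1254-atom WU's

drop = (old_ppd - new_ppd) / old_ppd
print(f"PPD drop per GPU: {drop:.0%}")  # about 34%
```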

PostPosted: Thu Sep 11, 2008 1:55 pm 
Black Belt 5th Degree

Joined: Tue Jul 10, 2001 12:01 am
Posts: 5491
Location: Flintshire, U.K
Yes, I'm getting these too. However, I've just read that you can minimize the points lost by running a normal WU on the unused core of your CPU....but I'm not yet in a position to confirm this.

Quote:
Curiously, the smaller WU's have consistently utilized 25% of the CPU core I have the client tied to, but the larger WU's are only utilizing between 3% to 11% of CPU time.


That seems to confirm the post I read on the Community forum. Time for some experimentation I think :)


Pete

PostPosted: Thu Sep 11, 2008 3:14 pm 
Black Belt 1st Degree

Joined: Sat Oct 28, 2000 12:01 am
Posts: 1962
Location: Oklahoma, USA
Pette Broad wrote:
...I've just read that you can minimize the points lost by running a normal WU on the unused core of your CPU.

.....Time for some experimentation I think :)


Yeah, I was thinking the same thing here. I will try running a standard WU on the core that I have tied to the GPU client and see what kind of impact it has on the smaller GPU WU's as well as the larger GPU WU's.

I assume since the GPU client has been running on that particular core for a while, if I start a normal WU on it, the GPU WU will get most of the cycles and the normal WU will get what's left? Will just have to see.
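Tying a client to one core comes down to an affinity bitmask, which can be sketched like this (a hypothetical helper for illustration; the core numbering and the Windows `start /affinity` usage are my assumptions, not details from the thread):

```python
# Hedged sketch: building the CPU affinity bitmask used to tie a
# process to specific cores. Each core sets one bit in the mask.

def affinity_mask(cores):
    """Return the affinity bitmask for the given zero-based core numbers."""
    mask = 0
    for core in cores:
        mask |= 1 << core
    return mask

# Pin both the GPU client and a normal CPU client to core 1 (the second
# core), so they share it as in the experiment described above:
mask = affinity_mask([1])
print(hex(mask))  # 0x2 -> e.g. "start /affinity 2 fah6.exe" on Windows
```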

PostPosted: Thu Sep 11, 2008 3:37 pm 
Black Belt 1st Degree

Joined: Sat Oct 28, 2000 12:01 am
Posts: 1962
Location: Oklahoma, USA
OK, just tested out running a GPU work unit and a normal work unit on the same core. Result: absolutely no time-per-frame increase on the GPU work unit. Adding a normal CPU WU to the core didn't affect the processing time on the GPU WU at all.

Now I will have to see if the normal WU is going to take longer to process. That might be harder to judge since I have never crunched this particular WU with a core all to itself before. Doesn't really matter though, it's a PPD gain either way.

EDIT: Looking back over my F@H logs, I found a point of reference on the normal WU that I am crunching at the same time as my GPU WU. The normal WU is taking about 35% longer to complete as opposed to having its own dedicated core. The GPU work unit is still unaffected. However, that may change if the normal WU client is still processing when the GPU client finishes a WU and requests another one. The normal WU client might take back all of the CPU cycles and starve the GPU client on the next run.
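The time-per-frame comparison from the logs can be sketched as a small parser (the log line format below is an approximation of the client's output, and `frame_times` is a hypothetical helper, not something from the thread):

```python
# Hedged sketch: estimating time per frame from F@H-style log timestamps.
import re
from datetime import datetime, timedelta

FRAME_RE = re.compile(r"\[(\d\d:\d\d:\d\d)\] Completed \d+ out of \d+ steps \((\d+)%\)")

def frame_times(log_lines):
    """Return the elapsed time between successive completed-percent lines."""
    times = []
    prev = None
    for line in log_lines:
        m = FRAME_RE.search(line)
        if not m:
            continue
        t = datetime.strptime(m.group(1), "%H:%M:%S")
        if prev is not None:
            delta = t - prev
            if delta < timedelta(0):      # log rolled past midnight
                delta += timedelta(days=1)
            times.append(delta)
        prev = t
    return times

log = [
    "[23:50:00] Completed 250000 out of 5000000 steps (5%)",
    "[23:58:20] Completed 500000 out of 5000000 steps (10%)",
    "[00:06:40] Completed 750000 out of 5000000 steps (15%)",
]
print(frame_times(log))  # two deltas of 8 min 20 s each
```

Comparing these deltas for the same project with and without a CPU WU sharing the core would quantify the slowdown directly.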

PostPosted: Fri Sep 12, 2008 1:17 am 
Black Belt 5th Degree

Joined: Tue Jul 10, 2001 12:01 am
Posts: 5491
Location: Flintshire, U.K
dharbert wrote:
However, that may change if the normal WU client is still processing when the GPU client finishes a WU and request another one. The normal WU client might take back all of the CPU cycles and starve the GPU client on the next run.



That would seem to be exactly what happens.

EDIT...I guess that if you went into config and set the core for normal WU's to somewhere around 65% then this would get round it.

I should add that as of this morning I am no longer getting the 430 pointers. There has been a glut of EUE's on these, usually instant (5511-5513 are especially bad), and I think they may have withdrawn them until a new core is released. Apparently that's in testing now. Also, there are some new 480 pointers in Beta as of this morning :)

Pete



Powered by phpBB® Forum Software © phpBB Group