maxdreamland

PixelGetColor considerably slower on ATI vs. NVidia?

Hi,

I recently switched from 64-bit Vista with an NVidia 8800GT graphics adapter to 64-bit Windows 7 with an ATI 5770 graphics adapter, and I noticed that the PixelGetColor() function runs considerably slower on the new setup. I'm talking about a factor of at least 20 on average.

I have a script that makes intense use of PixelGetColor(), and it originally had a "roundtrip" time of about 20 ms per loop. On the new setup the same script takes about 500 ms per loop, which in my case makes the script unusable.

After some testing I found that it is only slower when the graphics adapter runs at 32-bit color depth (which is common these days); when switched to 16-bit color depth, speed is back to normal. But 16-bit color depth is not an option for my script/use case, so it doesn't alleviate my problem.

I'm posting this because I hope for some inspiration, or an explanation of why this difference exists in the first place and, ideally, how it can be avoided.

I already implemented some caching, so when I repeatedly access the same pixel, I only read it once with PixelGetColor(). This brought the roundtrip time of the script down from 3500 ms to the above-mentioned 500 ms, but that is still too slow.
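For illustration, here is a minimal sketch of such a pixel cache. The helper name `_PixelGetColorCached` and the use of a `Scripting.Dictionary` COM object as the lookup table are my own choices, not the original poster's code:

; Sketch of the caching idea: keys are "x|y" strings, values are
; the colors previously returned by PixelGetColor().
Global $g_oCache = ObjCreate("Scripting.Dictionary")

Func _PixelGetColorCached($iX, $iY, $hWnd)
    Local $sKey = $iX & "|" & $iY
    If Not $g_oCache.Exists($sKey) Then
        $g_oCache.Item($sKey) = PixelGetColor($iX, $iY, $hWnd)
    EndIf
    Return $g_oCache.Item($sKey)
EndFunc

; Call $g_oCache.RemoveAll() whenever the screen content may have
; changed, otherwise stale colors will be returned.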

Any ideas? Does this qualify for a posting in the bug forum?

regards

Max


#2 ·  Posted (edited)

If I remember correctly, the culprit was the GetDC() call to the desktop which took a long time. You can now specify a window handle, and it will take a much shorter time.

There is no point in making a bug report; this is a known issue and it will not be fixed because it cannot be fixed (Edit: the problem is in Windows). The provided solution is to specify a window handle (the last parameter).

However, when you are doing many PixelGetColor() operations on the same area, it may be worth looking into this: http://www.autoitscript.com/forum/index.php?showtopic=63318 You'll have a lot more control over what you do and don't do.

Edited by Manadar


Your link certainly looks promising; I'll give that a try (it's similar to what I tried to achieve with my caching).

By the way, I already use a window handle in my PixelGetColor() calls, so I presume that can't be the issue.

What confuses me is that it was fast in both 32-bit and 16-bit color with the NVidia card and Vista. It's still fast in 16-bit with the ATI card and Windows 7, but it is slow in 32-bit on ATI/Windows 7. What special "feature" of the ATI 32-bit mode makes it that slow? Or does it have something to do with Windows 7?

Maybe it's something in the driver I can disable/enable?

To stress my point that the speed difference is significant, I wrote this little "benchmark":

Local $pixelGetColorTime = 0, $timer, $xn = 20, $yn = 20, $wnd = WinGetHandle("[ACTIVE]")
For $x = 1 To $xn
    For $y = 1 To $yn
        $timer = TimerInit()
        PixelGetColor($x, $y, $wnd)
        $pixelGetColorTime += TimerDiff($timer)
    Next
Next
ConsoleWrite("avg=" & ($pixelGetColorTime / ($xn * $yn)) & @CRLF)

On my setup this yields avg=18.9999965519038 in 32-bit (milliseconds per pixel get, that is) and avg=0.0146994256237989 in 16-bit.

That's more than 1200 times slower! So there must be something wrong somewhere, right?

regards

Max


#4 ·  Posted (edited)

I use the _GDIPlus_BitmapLockBits() method, which according to this example is 20x faster than the "GetPixel" method.

I haven't done any comparisons, but the method works well for me. Sampling about 500 pixels (off screen) and doing basic stats (mean and stdev) on the RGB values separately (i.e. 3x calculations) takes about 35 ms.
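To make the approach concrete, here is a rough sketch of the lock-bits pattern: capture the region once, lock the bitmap into memory, then read pixels straight from the buffer instead of calling PixelGetColor() per pixel. The region size and coordinates below are arbitrary examples, and details such as stride handling are simplified:

#include <GDIPlus.au3>
#include <ScreenCapture.au3>
#include <WinAPI.au3>

_GDIPlus_Startup()
; Capture a 100x100 screen region once.
Local $hHBitmap = _ScreenCapture_Capture("", 0, 0, 99, 99)
Local $hBitmap  = _GDIPlus_BitmapCreateFromHBITMAP($hHBitmap)
; Lock the bits for reading in 32-bit ARGB format.
Local $tData   = _GDIPlus_BitmapLockBits($hBitmap, 0, 0, 100, 100, $GDIP_ILMREAD, $GDIP_PXF32ARGB)
Local $iStride = DllStructGetData($tData, "Stride")
Local $pScan0  = DllStructGetData($tData, "Scan0")

; Read the pixel at (10, 10) directly from the locked buffer:
; each row is $iStride bytes, each pixel 4 bytes.
Local $iX = 10, $iY = 10
Local $tPixel = DllStructCreate("dword", $pScan0 + $iY * $iStride + $iX * 4)
ConsoleWrite("Color: 0x" & Hex(DllStructGetData($tPixel, 1), 8) & @CRLF)

_GDIPlus_BitmapUnlockBits($hBitmap, $tData)
_GDIPlus_BitmapDispose($hBitmap)
_WinAPI_DeleteObject($hHBitmap)
_GDIPlus_Shutdown()

The win comes from paying the expensive screen-to-memory transfer once per frame rather than once per pixel.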

QED

EDIT

Just took your benchmark challenge and got roughly comparable results using GDIPlus 0.0154 ms per pixel-get (1 million iterations). ATI 4670 (Oct 2009 driver), Windows 7 (x86), 32-bit color, Q6600 (2.4 GHz) processor. I guess the 20x speed difference claim is disputable - or does this mean my computer is 20x slower than yours? :) Hope that helps.

Edited by QED


I don't think AutoIt runs well under 16-bit, if it runs at all...

Hi,

I think you mixed up the color depth setting of the graphics adapter (32/16 bit) with the OS bit version.

;-))

Stefan


#7 ·  Posted (edited)

Ugh, this topic is confusing. That was it. Thank you.

P.S. I didn't realize some people STILL use 16 bit.

Edited by Manadar


To clear this up...

I'm using a 64-bit operating system (previously Vista, now Windows 7), and I usually run at 32-bit color depth. I only switched to 16-bit during my debugging to find out why it was so slow.

So the fact that it is fast at 16-bit color depth has no practical use for me; it's just for comparison and the sake of argument.

So my point is: why is there a speed difference of more than a factor of 1200 between 32-bit and 16-bit color, when it was equally fast in both before I switched to the new card and Windows version?

regards

Max


Just took your benchmark challenge and got roughly comparable results using GDIPlus 0.0154 ms per pixel-get

First of all, thank you for taking the time to do a comparison run. Your results are in line with my results on my old computer (Vista 64-bit with the NVidia card). I still have that machine here and just ran the test in both color depths; both yield identical results around the 0.015 ms mark, with no discernible speed difference between color depths.

I get the same value of around 0.015 ms on the new computer (Windows 7 64-bit with the ATI card), but only in (the useless) 16-bit color depth. As soon as I switch to the (common) 32-bit color depth, speed drops by a factor of more than 1200, to something like 20 ms per pixel get.

regards

Max


Most people blame this on Aero, although the real problem is the underlying engine, DWM. Having DWM active forces GDI to be processed by your CPU, which is many times slower, as you have found out.

You can disable DWM in several ways (try googling it), one being switching to 16-bit colors (notice how Aero disappears?), but you can also easily do it from the script; see my example HERE.
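For reference, toggling composition from a script boils down to one DllCall against dwmapi.dll. This is a sketch, not the linked example; note that DwmEnableComposition is deprecated on Windows 8 and later, where composition is always on:

; Turn desktop composition (DWM) off and back on via dwmapi.dll.
Func _DwmComposition($bEnable)
    Local $aRet = DllCall("dwmapi.dll", "long", "DwmEnableComposition", "dword", $bEnable ? 1 : 0)
    Return (Not @error) And $aRet[0] = 0 ; 0 = S_OK
EndFunc

_DwmComposition(False) ; Aero disappears, GDI reads speed up
; ... do the PixelGetColor-heavy work here ...
_DwmComposition(True)  ; restore composition afterwards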


My hat is off to you! Disabling Aero did the trick; now everything is back to being fast.

I'm sincerely thankful for your help!

regards

Max


Disabling Aero did the trick, now everything is back to being fast.

Do you really want to live without Aero? :)

Using PixelGetColor() I can confirm the behaviour you observed with your ATI card. In my case, however, it is not enough to just turn Aero off; I have to change to 16-bit color mode to get "normal" speed. This huge speed difference has serious implications for all ATI users. There are many threads here where people have reported very slow performance using PixelGetColor().

If this is a DWM issue then shouldn't the NVidia card be affected the same way? Hopefully someone can confirm this.

QED


@QED

You need to turn off DWM, not just Aero, to get "XP" speed from PixelGetColor() and friends. Please see my earlier post.

As for the NVidia card, he must have had DWM off for some reason (kind of strange, since the 8800GT has far more than enough power for Aero).

