Thanks to Marek Olšák for having a backup copy of my Git repository online! The hard drive containing much of my personal code that was on people.freedesktop.org (until those directories were lost) is halfway around the world.
Revenge now has a new home on http://gitorious.org/omcfadde/revenge.
I have bumped the version to 2.0.0, which introduces some minor configure.ac fixes: mostly PKG_CHECK_MODULES for libpci, sdl, and zlib. I have also updated email addresses and revenge.sh for non-developers.
Honestly I do not expect this code to get much interest now that we have documentation from AMD; but it's useful for historical/nostalgic reasons.
If I were to do it over again today, I would start with the kernel MMIO tracer (which would deal with the fglrx kernel module) and then extend it to handle dumping MMIO accesses from userspace processes too. The kernel is the perfect place to do this, and it would be far more reliable than a userspace approach.
If you have any questions or bug reports, feel free to ask them here and I will try to provide you with timely answers/fixes.
Sunday, February 20, 2011
Saturday, February 19, 2011
Math function micro-optimization...
Preamble for planet.freedesktop.org
Sorry about the poor formatting on planet.freedesktop.org; it seems it and BlogSpot don't quite get along, therefore you won't see any color highlights. It looks much better (and easier to read) on my actual blog page, honest!
Updated version includes float-to-int optimization and comments; sorry if this bumps this rather long post to the top again; this is not my intention. planet.freedesktop.org admins: is there some way to disable bumping when a post is updated? (Perhaps selectively, in case the bump is important. e.g. updated dates for an event.)
This analysis was performed using a modified version of Chris Lomont's inverse square-root testing code. The accompanying publication is worth reading before looking at any of this data.
I've started looking into whether a few optimized math functions perform any differently when -fstrict-aliasing is enabled. I did not believe strict-aliasing would have much of an effect on the optimized functions themselves (and it turns out I was correct), but the benefit is seen when compiling other code which includes these inline functions.
Without strict-aliasing compatibility, including the header file containing the incompatible functions/macros taints every file that includes it, meaning you cannot use -fstrict-aliasing where it may be helpful for your general code.
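To make that concrete, here is a minimal sketch (not the exact code from the test program, and the function names are mine) of the classic bit-hack inverse square root in both forms: the traditional pointer-cast version, which violates the strict-aliasing rules, and a union-based version, which GCC documents as well-defined and which therefore doesn't taint the including file.

```c
#include <stdint.h>

/* Traditional version: type-punning through a pointer cast breaks the
 * strict-aliasing rules, so any file including this cannot safely be
 * built with -fstrict-aliasing. */
static inline float rsqrt_carmack(float x)
{
    float xhalf = 0.5f * x;
    int32_t i = *(int32_t *) &x;        /* reinterpret the float's bits */
    i = 0x5f3759df - (i >> 1);          /* initial guess via the magic constant */
    x = *(float *) &i;
    return x * (1.5f - xhalf * x * x);  /* one Newton-Raphson iteration */
}

/* Strict-aliasing-friendly version: type-punning through a union, which
 * GCC explicitly permits, so -fstrict-aliasing remains usable elsewhere. */
static inline float rsqrt_union(float x)
{
    union { float f; int32_t i; } u;
    float xhalf = 0.5f * x;
    u.f = x;
    u.i = 0x5f3759df - (u.i >> 1);
    x = u.f;
    return x * (1.5f - xhalf * x * x);
}
```

(Lomont's variant is the same idea with a slightly retuned constant, 0x5f375a86.)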
Here are the results for the standard 1.0 / sqrt(x) frequently used in graphics engines. Even though today's renderers typically use carefully crafted SIMD functions for the critical path, this is still useful for quickly normalizing vectors in game code, etc.
The Lomont version of the function is a tiny bit faster and a tiny bit more accurate, but nothing to write home about.
Clearly this micro-optimization is an excellent win for x86 and x86_64. Don't try it on ARM; it's far slower than just taking the hit of 1.0 / sqrt(x).
I don't know whether this optimization could be modified for ARM; any assembly experts out there?
Timing Exact function
1752 ms used for 100000000 passes, avg 1.752e-05 ms
Timing Carmack function
463 ms used for 100000000 passes, avg 4.63e-06 ms
Timing Carmack function (strict-aliasing)
455 ms used for 100000000 passes, avg 4.55e-06 ms
Timing Lomont function
453 ms used for 100000000 passes, avg 4.53e-06 ms
Timing Lomont function (strict-aliasing)
455 ms used for 100000000 passes, avg 4.55e-06 ms
The absolute value function is mostly used for comparisons (e.g. fabs(y - x) > epsilon) and in some other specialized functions: finding which side of a plane an AABB resides on, its distance from said plane, the AABB radius, etc. Therefore it's useful to optimize this function where possible...
However, apparently it's quite a bit faster to just call libc's fabsf function! I saw this originally in the Quake 3 Arena source code, so maybe things were different with the compilers and hardware of the time.
Timing Exact fabsf function
268 ms used for 100000000 passes, avg 2.68e-06 ms
Timing Bit-Masking fabsf function
304 ms used for 100000000 passes, avg 3.04e-06 ms
Timing Bit-Masking fabsf function (strict-aliasing)
305 ms used for 100000000 passes, avg 3.05e-06 ms
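For reference, a sketch of what I mean by the bit-masking fabsf (my own minimal version, not necessarily the code that was timed): it simply clears the IEEE 754 sign bit, and the union keeps it strict-aliasing clean.

```c
#include <stdint.h>

/* Branch-free absolute value: clear the IEEE 754 sign bit. In my timings
 * libc's fabsf() still beat this, so it is shown only for illustration. */
static inline float fabsf_mask(float x)
{
    union { float f; uint32_t i; } u;
    u.f = x;
    u.i &= 0x7fffffffu;   /* drop the sign bit */
    return u.f;
}
```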
These macros/functions are used when you want to know the sign of a float (i.e. whether the value is positive or negative) without performing any comparison (for performance reasons). It seems that the strict-aliasing versions perform about identically to the macros.
Timing Exact float sign bit not set function
327 ms used for 100000000 passes, avg 3.27e-06 ms
Timing FLOATSIGNBITNOTSET macro
313 ms used for 100000000 passes, avg 3.13e-06 ms
Timing Bit-Masking float sign bit not set function (strict-aliasing)
312 ms used for 100000000 passes, avg 3.12e-06 ms
Timing Exact float sign bit set function
342 ms used for 100000000 passes, avg 3.42e-06 ms
Timing FLOATSIGNBITSET macro
305 ms used for 100000000 passes, avg 3.05e-06 ms
Timing Bit-Masking float sign bit set function (strict-aliasing)
305 ms used for 100000000 passes, avg 3.05e-06 ms
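The idea behind these is just to read the IEEE 754 sign bit directly instead of comparing against zero; a rough sketch of union-based equivalents (the actual FLOATSIGNBITSET/FLOATSIGNBITNOTSET macros differ in their details) looks like this:

```c
#include <stdint.h>

/* Union-based equivalents of the sign-bit macros: shift the IEEE 754 sign
 * bit down to bit 0 rather than doing a floating-point comparison. Note
 * that -0.0f reports its sign bit as set. */
static inline int float_sign_bit_set(float x)
{
    union { float f; uint32_t i; } u;
    u.f = x;
    return (int) (u.i >> 31);          /* 1 if the sign bit is set */
}

static inline int float_sign_bit_not_set(float x)
{
    union { float f; uint32_t i; } u;
    u.f = x;
    return (int) ((~u.i) >> 31);       /* 1 if the sign bit is clear */
}
```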
Don't use "d = (int) f" if you want fast code. fld and fistp work nicely on x86 and x86_64.
These measurements were taken on my laptop with an Intel(R) Core(TM)2 Duo CPU P9500 @ 2.53GHz processor and the test program compiled with gcc version 4.4.5 (Debian 4.4.5-6).
Timing Exact float-to-int function
1252 ms used for 100000000 passes, avg 1.252e-05 ms
Timing Fast float-to-int function
336 ms used for 100000000 passes, avg 3.36e-06 ms
Done. By Chris Lomont 2003. Modified by Oliver McFadden 2011
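A sketch of the x87 trick (GCC inline assembly; the function name is mine): fld/fistp convert using the FPU's current rounding mode, whereas a C cast must truncate, which is what makes the plain cast slow on x87. This is also where the rounding differences mentioned below come from, since the default mode is round-to-nearest.

```c
/* Classic x87 float-to-int conversion. fistp uses the current FPU rounding
 * mode (round-to-nearest by default) instead of truncating like (int) f,
 * so results can differ from a plain cast. Only meaningful where the x87
 * FPU is actually used for the conversion. */
static inline int fast_ftol(float f)
{
    int i;
    __asm__ ("flds %1\n\t"    /* push the 32-bit float onto the FPU stack */
             "fistpl %0"      /* store it as a 32-bit int and pop */
             : "=m" (i)
             : "m" (f));
    return i;
}
```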
Whether this makes any huge difference in frames per second is debatable; really, I had a bit of time and was bored. :-) Anyway, I wouldn't claim anything until testing under real-world conditions.
It does look like the bit-masking fabs can be thrown away, though, and the fast float-to-int is a major win (although beware of possible rounding differences).
Labels: Analysis, Development
Looking for a copy of my Revenge tool
Dear Lazyweb,
If anyone happens to have a copy of the Revenge (Radeon Reverse-Engineering Tool) Git repository, tarball, or code in any format, please comment on this post.
I know one released tarball was named "revenge-1.0.1.tar.gz", but it unfortunately disappeared when the home directories on people.freedesktop.org were lost. I believe there were newer versions, too.
I am reasonably (~90%) sure that I have the Git repository stored on one of my computers; unfortunately, the computer in question is currently half the world away and not online.
Perhaps this post will serve as a reminder to back up your code in more than one location (excluding your workstation). Yeah, my bad. :-(
Underwater ultrasonic data modulation?
I'm currently looking for a bit of a combination hardware and software project to fill the boredom, so I've been thinking about underwater ROVs. Traditionally these use a surface tether for command communication (presumably with some basic protocol) and a feed from the camera and sensors.
I'm wondering what kind of distance I could get with modulated ultrasonic transducers?
I am quite sure of how to design a suitable protocol; in fact I can reuse a lot of the bit-message code used in Quake 3 (there are lots of gems in there). That provides me with compact messages, and with Huffman compression and an optimized table (based on either simulated or real-world packet captures) the compression ratio becomes pretty good. I think it would even be possible to send the surface a low-resolution/low-FPS video feed, while recording the high-resolution real-time feed to a solid-state drive.
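To give a flavour of what I mean by bit-messages (this is my own minimal sketch, not the actual Quake 3 MSG_* code, and the command layout is entirely made up), values are packed with exactly the number of bits they need, so a command stays tiny even before Huffman compression:

```c
#include <stdint.h>
#include <string.h>

/* Minimal bit-packing writer in the spirit of Quake 3's msg_t. */
typedef struct {
    uint8_t data[256];
    int     bit;                         /* write position, in bits */
} bitmsg_t;

static void bitmsg_init(bitmsg_t *m)
{
    memset(m, 0, sizeof(*m));
}

static void bitmsg_write_bits(bitmsg_t *m, uint32_t value, int bits)
{
    int i;
    for (i = 0; i < bits; i++) {
        if (value & (1u << i))
            m->data[m->bit >> 3] |= (uint8_t) (1u << (m->bit & 7));
        m->bit++;
    }
}

/* Hypothetical thruster command: 4-bit command id, two 7-bit power levels
 * and a 1-bit lights toggle -- 19 bits in total. */
static void write_thruster_command(bitmsg_t *m, int left, int right, int lights)
{
    bitmsg_write_bits(m, 0x3, 4);                  /* made-up CMD_THRUSTER id */
    bitmsg_write_bits(m, (uint32_t) left, 7);
    bitmsg_write_bits(m, (uint32_t) right, 7);
    bitmsg_write_bits(m, (uint32_t) lights, 1);
}
```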
The challenging part would be modulating the data; I have no idea how to choose a carrier frequency and modulation scheme for low-frequency (longer range, less ambient noise) ultrasonic transducers.
Furthermore, assuming the ROV and surface use the same frequency, the link would be half-duplex. This should be fine in theory as the protocol could be designed around it, but ultrasonic is still sound, so picking up an echo is a definite possibility (less so in open water). I don't see this being too much of a problem (famous last words) because the protocol would be designed on the assumption that the link is inherently unreliable: packet sequence number checking, a CRC check, and sanity checks on values.
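As a rough sketch of what that validation might look like (the header layout and limits here are entirely hypothetical), the receiver simply drops anything that fails a duplicate, length, or CRC check and waits for the next packet:

```c
#include <stdint.h>

/* Hypothetical packet header for a lossy half-duplex link: every packet is
 * validated independently and silently dropped on any failure. */
typedef struct {
    uint16_t sequence;    /* increments per packet; detects duplicates/echoes */
    uint16_t length;      /* payload length in bytes */
    uint32_t crc;         /* checksum of the payload, e.g. zlib's crc32() */
} packet_header_t;

/* Assumed to be provided elsewhere (zlib's crc32 would do). */
extern uint32_t payload_crc(const uint8_t *data, uint16_t length);

static int packet_valid(const packet_header_t *h, const uint8_t *payload,
                        uint16_t last_sequence)
{
    if (h->sequence == last_sequence)
        return 0;                          /* duplicate, or an echo of our own */
    if (h->length == 0 || h->length > 512)
        return 0;                          /* sanity check on the length field */
    if (payload_crc(payload, h->length) != h->crc)
        return 0;                          /* corrupted in transit */
    return 1;                              /* accept; lost packets are just skipped */
}
```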
Far more concerning would be the potential to output too much power and thus damage divers' ears underwater. I guess that would just be a matter of capping the output power at the maximum exposure limit for the length of the dive, minus an N percent safety margin.
At least those full face mask underwater communication systems seem to work very well for voice, and do not have any exposure limits that I know about. A quick Google search shows one rated for "50 to 500 meters depending on Sea Conditions and noise levels." Of course, this is voice, which is much higher bandwidth than a simple command stream and perhaps 640x480 compressed 1 FPS video.
Anyone out there know about modulation schemes? There's a little on Google, but not too much; possibly I don't know what I'm looking for, though.
Just a random brain-dump idea I've been thinking over; at least writing this post breaks the boredom somewhat and might even motivate me to work on code for the sinking ship that is Nokia. Last I checked the stock was down ~20% and dropping since Stephen Elop's announcement. Ah, back to my general state of pessimistic realism.
Labels: Development