F# microbenchmark study


Recently, there was some discussion about a set of microbenchmarks reported in a study called Clash of the Lambdas, which compared a simple stream/sequence benchmark across Java 8 Streams, Scala, C# LINQ and F#. I am learning F#, and as a learning exercise I decided to re-implement one of the benchmarks (Sum of Squares Even) myself in F# without referring to the code provided by the authors.

The source of my implementation can be found on Bitbucket, and binaries are also provided. My interest was in testing and comparing various F# implementations, not in cross-language comparison. I implemented it in four different ways (plus a fifth, added later; see the update below), and a minimal sketch of the basic shapes follows the list:

  • Imperative sequential for-loop
  • Imperative parallel version using Parallel.For from Task Parallel Library
  • Functional sequential version using F# sequences
  • Functional parallel version using F# PSeq from FSharp.ParallelSeq
  • UPDATE: I added a functional version using the Nessos Streams package as suggested by Nick Palladinos on twitter
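
To make the comparison concrete, here is a minimal sketch of the shapes involved. This is not the exact code from the repository; the input array, its size, and the exact PSeq/Streams namespaces are my assumptions, and the imperative parallel variant (Parallel.For with per-thread partial sums) is omitted for brevity.

```fsharp
// Minimal sketch of the "Sum of Squares Even" kernel -- illustrative, not the benchmark code.
// Assumes the FSharp.Collections.ParallelSeq and Streams (Nessos) packages are referenced;
// namespace names may differ slightly between package versions.
open FSharp.Collections.ParallelSeq   // PSeq
open Nessos.Streams                   // Stream

// Illustrative input; the real benchmark uses a large array of values.
let values : int64 [] = Array.init 10000000 (fun i -> int64 i)

// Imperative sequential version: a plain for-loop with a mutable accumulator.
let sumSqEvenImperative (xs: int64 []) =
    let mutable acc = 0L
    for i in 0 .. xs.Length - 1 do
        let x = xs.[i]
        if x % 2L = 0L then acc <- acc + x * x
    acc

// Functional sequential version: an F# sequence pipeline.
let sumSqEvenSeq (xs: int64 []) =
    xs
    |> Seq.filter (fun x -> x % 2L = 0L)
    |> Seq.map (fun x -> x * x)
    |> Seq.sum

// Functional parallel version: the same pipeline over PSeq.
let sumSqEvenPSeq (xs: int64 []) =
    xs
    |> PSeq.filter (fun x -> x % 2L = 0L)
    |> PSeq.map (fun x -> x * x)
    |> PSeq.sum

// Nessos Streams version: same pipeline shape, but push-based/fused internally.
let sumSqEvenStreams (xs: int64 []) =
    xs
    |> Stream.ofArray
    |> Stream.filter (fun x -> x % 2L = 0L)
    |> Stream.map (fun x -> x * x)
    |> Stream.sum
```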

I compiled using VS 2013 Express and F# 3.1 with “Release” settings, Any CPU (32-bit not preferred), and ran it on my machine on 3 different CLR implementations: MS CLR from .net SDK 4.5.2 running on Windows 8.1, MS CLR RyuJIT CTP4, and finally OpenSUSE 13.1 using Mono 3.4 (sgen GC, no LLVM).

The results are as follows:

|                | Imperative sequential | Imperative parallel | Functional sequential | Functional parallel | Streams |
|----------------|-----------------------|---------------------|-----------------------|---------------------|---------|
| MS CLR         | 17                    | 8                   | 172                   | 81                  | 45      |
| MS RyuJIT CTP4 | 18                    | 7                   | 168                   | 76                  | 44      |
| Mono           | 88                    | 23                  | 240                   | 797                 | 97      |

Some observations for this microbenchmark:

  • The imperative version is far faster than the functional version, but the functional version was shorter and clearer to me. I wonder if there is some opportunity for compiler optimization in the F# compiler for the functional version, such as inlining sequence operations or fusing a pipeline of operations where possible (see the hand-fused sketch after this list).
  • MS RyuJIT CTP4, which is the beta version of the next-gen MS CLR JIT, performs similarly to the current MS CLR. This is good to see.
  • Mono is much slower than the MS CLR. Also, it absolutely hates F# parallel sequences for some reason.  I guess I will have to try and install Mono with LLVM enabled and then check the performance again.
  • Streams package from Nessos looks to be faster than F# sequences in this microbenchmark. It is currently sequential only but performs much faster than even PSeq.
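
To illustrate the kind of fusion mentioned above, here is a hand-fused version of the same pipeline (again only a sketch, not code from the repository): filter, map and sum collapsed into a single fold with no intermediate sequences. This is roughly what the imperative loop does by hand, which is why the gap between the imperative and functional versions is interesting.

```fsharp
// Hand-fused equivalent of filter |> map |> sum: one pass, no intermediate sequences.
let sumSqEvenFused (xs: int64 []) =
    xs |> Array.fold (fun acc x -> if x % 2L = 0L then acc + x * x else acc) 0L
```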

These observations only apply to this microbenchmark and probably should not be taken as general results. Overall, it was a fun learning experience, especially as a newcomer to both F# and the .net ecosystem. F# looks like a really elegant and powerful language and is a joy to write. There is still a LOT more to learn about both. For example, I am not quite clear on the best way to distribute .net projects as open source. Should I distribute VS solution files? I am more used to distributing build files for CMake, Make, scons, ant etc., and I am looking more into FAKE. NuGet appears to be useful but not very powerful (e.g. it can’t remove packages) and merits further investigation.


Getting F# running on Linux


Getting F# running on Linux took a lot more effort than I anticipated. I am documenting the process here in the hope it may benefit someone (maybe myself) in the future. For reference, I am using OpenSUSE 13.1.

  • F# is not compatible with all versions of Mono. For example, my distro repos have Mono 3.0.6, which appears to have some issues with F#. Instead, I found that some people make newer Mono packages available for various distros via the openSUSE Build Service (OBS). For example, check out the tpokorra repos for various distros such as OpenSUSE, CentOS, Debian etc. I installed “mono-opt” and related packages, which installed Mono 3.4 into the /opt/mono directory.
  • If you install Mono into /opt/mono, then ensure that you append “/opt/mono/lib” to the LD_LIBRARY_PATH environment variable and /opt/mono/bin to the PATH variable. I did this in my .bashrc.
  • By default, /opt/mono/bin/mono turned out to be a symlink to /opt/mono/bin/mono-sgen. Mono ships two runtimes: one using the sgen GC and one using the Boehm GC. I had trouble compiling F# with mono-sgen, so I removed /opt/mono/bin/mono and recreated it as a symlink to /opt/mono/bin/mono-boehm.
  • Now open up a new shell. In this shell, set up a few environment variables temporarily required for building F#. First, “export PKG_CONFIG_PATH=/opt/mono/lib/pkgconfig”. Next, we need to set up some GC parameters for Mono. It turns out compiling F# requires a lot of memory, and Mono craps out with the default GC parameters. I have a lot of memory in my laptop, so I let the Mono GC use up to 2 GB as follows: “export MONO_GC_PARAMS=max-heap-size=2G”. These two settings likely won’t be required after you have compiled and installed F#.
  • Now you can follow the instructions given on the F# webpage.

Specifically, I did the following (a quick sanity check follows the list):

  • git clone https://github.com/fsharp/fsharp
  • cd fsharp
  • ./autogen.sh --prefix=/opt/mono  # keep things consistent with the rest of the Mono install
  • make  #Takes a lot of time
  • su
  • make install
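
As a quick sanity check after installation (the file name here is mine, not from any official instructions), you can compile and run a trivial program:

```fsharp
// hello.fs -- trivial program to verify that the F# compiler and Mono runtime work
[<EntryPoint>]
let main argv =
    printfn "Hello from F# on Mono"
    0
```

Compile it with “fsharpc hello.fs” and run it with “mono hello.exe”; if both steps work, the toolchain is in place.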

Fashion and buzzwords in software industry


As I get somewhat older, I have come to the realization that attempting to keep track of all the buzzwords and current fashions in the software industry is counterproductive to actual work. At some point, I have to draw a line. Trends, programming styles, languages and APIs all come and go.

It is important to keep oneself updated, but like anything else, it should be done in moderation. It is also perhaps more important to get better at fundamental computer science ideas than necessarily knowing 10 APIs to do the same thing.

Anyway, I intend to be a bit more selective about which technologies I learn, focusing on a few at any given point in time. The idea is not to stand still, but rather to have a focus period longer than a goldfish’s. For the rest of 2014, I have made a much bigger learning list of fundamental CS ideas and a much shorter list of “technologies”. I will keep posting about some of the books I am reading.


Metal compute notes


I have been reading some Metal API documents. Some brief notes about Metal compute from the perspective of pure compute and not graphics:

Kernels: If you know previous GPU compute APIs such as OpenCL or CUDA, you will feel at home. You have work-items organized into work-groups. A work-group has access to up to 16 kB of local memory. Items within a work-group can synchronize, but different work-groups cannot. You do have atomic instructions on global and local memory. You don’t have function pointers, and while the documentation doesn’t mention it, likely no recursion either. There is no dynamic parallelism, and you cannot do dynamic memory allocation inside kernels. This is all very similar to OpenCL 1.x.

Memory model: You create buffers, and kernels read/write buffers. Interestingly, you can create buffers from pre-allocated memory (i.e. from a CPU pointer) with zero copy, provided the pointer is aligned to a page boundary. This makes sense because on the A7, both CPU and GPU obviously have access to the same physical pool of memory.

I think the CPU and GPU cannot simultaneously write to a buffer. The CPU is only guaranteed to see updates to a buffer when the GPU command completes execution, and the GPU is only guaranteed to see CPU updates if they occur before the GPU command is “committed”. So we are far from HSA-type functionality.

Currently I am unclear about how pointers work in the API. For example, can you store a pointer value in a kernel, and then reload it in a different kernel? You can do this in CUDA and OpenCL 2.0 “coarse grained” SVM for example, but not really in OpenCL 1.x. I am thinking/speculating they don’t support such general pointer usage.

Command queues:  This is the point where I am not at all clear about things but I will describe how I think things work. You can have multiple command queues similar to multiple streams in CUDA or multiple command queues in OpenCL. Command queues contain a sequence of “command buffers” where each command buffer can actually contain multiple commands. To reduce driver overhead, you can “encode” or record commands in two different command buffers in parallel.

Command queues can be thought of as in-order but superscalar. Command buffers are ordered according to when they were encoded. However, the API keeps track of resource dependencies between command buffers, and if two command buffers in sequence are independent, they may be issued in parallel. I am not sure how much this “superscalar” behaviour applies to purely compute-driven scenarios; it will likely apply more to mixed scenarios where a graphics task and a compute task may be issued in parallel.

GPU-only: Metal currently only works on GPUs, and not, say, the CPU or the DSP.

Images/textures: Haven’t read this yet. TODO.

Overall, Metal is similar in functionality to OpenCL 1.x; it is more about niceties such as C++11 support in the kernel language (a static subset), so you can use templates, overloading, some static usage of classes etc. Graphics programmers will also appreciate the tight integration with the graphics pipeline. To conclude, if you have used OpenCL or CUDA, your skills will transfer over easily to Metal. From a theory perspective it is not a revolutionary API and does not bring any new execution or memory model niceties. It is essentially Apple’s take on the same concepts, focused on tackling practical issues.


Driver overhead matters more on SoCs


There has been a lot of discussion about driver overhead in graphics and compute APIs recently. A lot of it has been centred around desktop-type scenarios with discrete GPUs, but I just wanted to point out that driver overhead matters more on SoCs, which integrate both CPU and GPU on the same chip.

The simple reason is that SoCs have a fixed total power budget, and modern SoCs dynamically distribute that budget between CPU and GPU. If there is a lot of driver overhead, meaning the CPU is doing a lot of work, then the CPU eats a bigger part of the fixed power budget and the SoC may be forced to reduce the GPU frequency. In addition to power, caches and memory bandwidth may also be shared.

I have done some benchmarking and tuning of OpenCL code for Intel’s Core chipsets, and getting the best performance out of the GPU often required being more efficient on the CPU. I am pretty sure a similar strategy applies to smartphone SoCs, with the added constraint that smartphone CPUs are usually wimpy due to power constraints.



apitest results on AMD comparing various OpenGL and D3D11 approaches


UPDATE: The issues related to texture arrays appear to be an application error. Michael Marks provides a fork that corrects some issues.

UPDATE 2: I reran some of the Linux benchmarks; the earlier Linux results appear to have had a bug. Performance on Linux and Windows is now similar.

The strengths and weaknesses of OpenGL compared to other APIs (such as D3D11, D3D12 and Mantle) and the recent talk Approaching Zero Driver Overhead (AZDO) have become topics of hot discussion. The AZDO talk included a nice tool called “apitest” that allows us to compare a number of solutions in OpenGL and D3D. Hard data is always better than hand-wavy arguments. In the AZDO talk, data from apitest was shown for Nvidia hardware, but no numbers were given for either Intel or AMD hardware. Michael Marks ran the tool on Linux and had some interesting results to report, which imply that AMD’s drivers have higher overhead than Nvidia’s.

However, I wanted to answer slightly different questions. For example, if we restrict ourselves to AMD hardware, how does the performance compare to D3D? What is the performance and compatibility difference between Windows and Linux? And what is the performance of various approaches across hardware generations? With these questions in mind, I built and ran apitest on some AMD hardware on both Linux and Windows.

Hardware: AMD A10-5750M APU with 8650G graphics (VLIW4) + 8750M (GCN) switchable graphics. Catalyst 14.4 installed on both Linux and Windows. Catalyst allows explicit selection of graphics processor.  Laptop has a 1366×768 screen.

Build: On Windows, built for Win32 (i.e. 32-bit) using VS 2012 Express and DX SDK (June 2010). Release setting was used.  On Linux, built for 64-bit using G++ 4.8 on OpenSUSE 13.1.  Required one patch in SDL cmake file.

Run: The tool was run using “apitest.exe -a oglcore -b -t 15”, which is the same setting Michael Marks used. On Linux, it was run under KDE, and desktop effects were kept disabled in case that makes a difference.

Issues encountered:

I encountered some issues. I am not sure if the error is in the application, the user (i.e. me) or the driver.

  1. Solutions using shader draw parameters (often abbreviated as SDP in the talk) appear to lead to driver hangs on GCN and are unsupported on VLIW4. Therefore I have not reported any  SDP results here. Michael Marks also saw the same driver hangs on GCN on Linux, did some investigation and has posted some discussion here.
  2. Solutions involving ARB_shader_image_load_store (which is core in OpenGL 4.2 and not some arcane extension) appear to be broken on Windows but work on Linux, despite installing the same Catalyst version. On Windows, the driver appears to report a compilation error for some shaders, saying that “readonly” is not supported unless you enable the extension. UPDATE: This was an application bug.
  3. GCN based 8750M should support bindless textures. However, some of the bindless based solutions failed to work. For example GLBindlessMultiDraw failed.  Sparse bindless also failed to work.

Data:
I did not test 8750M on Linux, partially because I am lazy and partially because I did not want to disturb my Linux setup which I use for my university work. Anyway, here is the data for 3 problems covered by apitest.

Dynamic streaming

| Solution               | 8650G Windows (FPS) | 8650G Linux (FPS) | 8750M Windows (FPS) |
|------------------------|---------------------|-------------------|---------------------|
| D3D11MapNoOverwrite    | 14.629              | 0                 | 19.6                |
| D3D11UpdateSubresource | 0.978               | 0                 | 1.198               |
| GLMapPersistent        | 19.987              | 19.471            | 20.885              |
| GLBufferSubData        | 0.89                | 1.015             | 0.843               |
| GLMapUnsynchronized    | 0.397               | 0.409             | 0.362               |

Textured quads

| Solution                            | 8650G Windows (FPS) | 8650G Linux (FPS) | 8750M Windows (FPS) |
|-------------------------------------|---------------------|-------------------|---------------------|
| D3D11Naive                          | 65.11               | 0                 | 42.463              |
| GLTextureArrayMultiDraw-NoSDP       | 346                 | 400.25            | 492                 |
| GLTextureArray                      | 235                 | 276.67            | 350                 |
| GLNoTex                             | 215.94              | 275.57            | 347.608             |
| GLTextureArrayMultiDrawBuffer-NoSDP | 212                 | 239               | 472                 |
| GLNoTexUniform                      | 81.429              | 93.38             | 133.115             |
| GLTextureArrayUniform               | 80.75               | 92.72             | 109.5               |
| GLNaiveUniform                      | 32.717              | 31.64             | 32.059              |
| GLNaive                             | 27.3                | 15.02             | 27.21               |
| GLBindless                          | Unsupported         | Unsupported       | 112.7               |

Untextured quads:

| Solution                | 8650G Windows (FPS) | 8650G Linux (FPS) | 8750M Windows (FPS) |
|-------------------------|---------------------|-------------------|---------------------|
| D3D11Naive              | 4.078               | 0                 | 2.16                |
| GLMultiDraw-NoSDP       | 17.221              | 17.661            | 19.93               |
| GLMapPersistent         | 10.844              | 11.089            | 13.687              |
| GLDrawLoop              | 10.615              | 10.45             | 13.59               |
| GLBufferStorage-NoSDP   | 9.862               | 10.041            | 5.096               |
| GLMultiDrawBuffer-NoSDP | 9.069               | 9.404             | 7.703               |
| GLMapUnsynchronized     | 5.908               | 5.726             | 7.281               |
| GLTexCoord              | 5.702               | 5.382             | 4.963               |
| GLUniform               | 3.282               | 3.53              | 4.399               |
| GLBufferRange           | 3.031               | 3.509             | 3.119               |
| GLDynamicBuffer         | 0.361               | 0.515             | 0.37                |

Conclusion:

  1. The theoretical principles discussed in the AZDO talk appear to be sound. The “modern GL” techniques discussed do appear to substantially reduce driver overhead compared to older GL techniques. The reduction was seen on AMD hardware on both Windows and Linux and worked on two different architectures (VLIW4 based APU, GCN based discrete). In particular, persistent buffer mapping (sometimes called PBM) and multi-draw-indirect (MDI) based techniques seem useful.
  2. On Windows, the best OpenGL solutions do appear to significantly outperform D3D. I am not an expert on D3D, so I am not sure if better D3D11 solutions exist.
  3. If a test ran successfully on both Windows and Linux, then the performance was qualitatively similar in most cases.
  4. However, while theoretically things look good, in practice some issues were encountered. Some of the solutions failed to execute despite theoretically being supported by the hardware.  In particular, shader draw parameters as well as some variations of bindless textures appear to be problematic. I am not sure if it was the fault of the application, the user (me) or the driver.

OpenGL, OpenGL ES compute shaders and OpenCL spec minimums


Well, I really got this one wrong. Previously, I had (mistakenly) claimed that OpenGL compute shaders, OpenGL ES compute shaders etc. don’t really have specified minimums for some things, but I got that completely wrong. I guess I need to be more careful while reading these specs, as the required minimums are not necessarily located in the same place where the features are explained. Some of the OpenCL claims still stand, though OpenCL’s relaxed specs are a bit more understandable given that it has to run on more hardware than the others.

Here are the minimums:

  • OpenGL 4.3: 1024 work-items in a group, and 32kB of local memory
  • OpenGL ES 3.1: 128 work-items in a group, and 16kB of local memory (updated from 32kB, Khronos “fixed” the spec)
  • OpenCL 1.2:  1 work-item in a group, 32kB of local memory
  • OpenCL 1.2 embedded: 1 work-item in a group, 1kB of local memory
  • DirectCompute 11 (for reference): 1024 work-items in a group, 32kB of local memory

Thanks to Graham Sellers and Daniel Koch on twitter for pointing out the error. I guess I got schooled today.


Khronos standards and “caps”


UPDATE: This post is just plain wrong. See correction HERE. Thanks to various people on twitter, especially Graham Sellers and Daniel Koch for pointing this out.

Just venting some frustration here. One of the annoying things in Khronos standards is the lack of required minimum capabilities, which makes writing portable code that much harder. The minimum guarantees are very lax. Just as an example, take work-group sizes in both OpenCL and OpenGL compute shaders. In both of these, you have to query to find out the maximum work-group size supported, which may turn out to be just 1.

Similarly, in OpenGL (and ES) compute shaders, there is no minimum guaranteed amount of local memory per workgroup. You have to query to ask how much shared memory per workgroup is supported, and the implementation can just say zero because there is no minimum mandated in the specification.

edit: Contrast this with DirectCompute where you have mandated specifications for both the amount of local memory and the work-group sizes which makes life so much simpler.


Specifying OpenGL version to Qt Quick 2


UPDATE (24th May, 10.20pm EST): Corrected some errors below.

I was looking at how to integrate custom OpenGL content inside a Qt Quick 2 application. Qt Quick 2 offers several solutions, and I settled on a custom OpenGL underlay. However, one stumbling block was that I wanted to use the OpenGL 4.3 core profile. On my test machine, the driver supports OpenGL 4.4, and I discovered that by default Qt Quick 2 created a 4.4 compatibility profile context.

After much searching, I finally found the solution I was looking for. I am not sure if it is the only way or the optimal way, but it worked for me. You can specify the OpenGL properties, including the desired profile, by creating a QSurfaceFormat object and passing it to the QQuickWindow object using setFormat before Qt Quick’s scene graph begins rendering, i.e. before you invoke QQuickWindow::show in C++, or before you set the “visible” property of the window to true in QML.

The next question was, how to get to the window. Here is how to do so in some scenarios:

  • If you have a handle to any QQuickItem inside the window, then simply call its “window” method.
  • If you have created your application using the Qt Quick Controls ApplicationWindow, and are using QQmlApplicationEngine to load the QML, make sure the “visible” property of the ApplicationWindow is set to false in the QML. Then simply get the first root object from the engine using something like engine->rootObjects().first(). This is your window object, and you can simply cast it to QQuickWindow.
  • If you have created your application using QQuickView, then you are in luck, because QQuickView is a QQuickWindow, so just call setFormat on the QQuickView.

Once you have done setFormat, you are then free to call “show” on the QQuickWindow and it will make your window visible.



Geekbench 3 IPC


Geekbench 3 is one of the better benchmarks out there for comparing mobile CPU performance. It contains a variety of tests and reports a cumulative single-core score and a multi-core score. One way of analyzing processors is to get an idea of per-cycle performance. For this, I took the single-core scores for various processors and divided them by the reported clock frequency to obtain the following metric: Geekbench 3 single-core score / GHz.

I report the results in the table below. There can be many implementations of a given ARM core in different chipsets, and the same chipset can also perform slightly differently in different devices, so I report the device from which I got the scores. Even so, the computations are very approximate and are based on rough averages of Geekbench 3 scores from different users as reported on the Geekbench Browser.

Here it is:

| CPU core          | Device(s)             | Score/GHz |
|-------------------|-----------------------|-----------|
| Cortex A7         | Asus Memo Pad HD 7    | 250       |
| Scorpion          | Galaxy S2 X (T989D)   | 250       |
| Cortex A9         | Galaxy S2 (i9100)     | 290       |
| Krait 200         | HTC One S, Xperia ZL  | 330       |
| Krait 300         | Moto X                | 390       |
| Krait 400         | Nexus 5, LG G2        | 405       |
| Cortex A15        | Nvidia Shield         | 480       |
| Apple A6          | iPhone 5c             | 540       |
| Apple A7 (32-bit) | iPhone 5s             | 800       |
| Apple A7 (64-bit) | iPhone 5s             | 1050      |

Note that for Scorpion, the reported frequency was 1.5 GHz, but I have never seen it go above 1.242 GHz on some devices I used previously, so I used 1.242 GHz as the frequency.
