Geekbench 3 IPC

Geekbench 3 is one of the better benchmarks out there for comparing mobile CPU performance. It contains a variety of tests and reports a cumulative single-core score and a multi-core score. One way of analyzing processors is to get an idea of per-cycle performance. For this, I took the single-core scores for various processors and divided them by the reported clock frequency to obtain the following metric: Geekbench 3 single-core score per GHz.
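For example (illustrative numbers): a device that scores about 1365 in the single-core test while running at 1.3 GHz works out to 1365 / 1.3 ≈ 1050 points per GHz.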

I report the results in the table below. There can be many implementations of a given ARM core across different chipsets, and the same chipset can also perform slightly differently in different devices, so I also list the device from which I got the scores. Even so, the computations are very approximate and are based on rough averages of Geekbench 3 scores from different users as reported on the Geekbench Browser.

Here it is:

CPU core            Device(s)                Score/GHz
Cortex A7           Asus Memo Pad HD 7       250
Scorpion            Galaxy S2 X (T989D)      250
Cortex A9           Galaxy S2 (i9100)        290
Krait 200           HTC One S, Xperia ZL     330
Krait 300           Moto X                   390
Krait 400           Nexus 5, LG G2           405
Cortex A15          Nvidia Shield            480
Apple A6            iPhone 5c                540
Apple A7 (32-bit)   iPhone 5s                800
Apple A7 (64-bit)   iPhone 5s                1050

Note that the reported frequency for Scorpion was 1.5 GHz, but I never saw it go above 1.242 GHz on the devices I used previously, so I used 1.242 GHz as the frequency.

Qt Creator and VS 2013 (Express)

I had previously noted that I installed Qt Creator 3.0 and that it automatically detected compiler toolchains, CMake etc. At that time I was running VS 2010 Professional; I recently moved to VS 2013 Express. When I opened a new CMake project in Qt Creator, it suddenly no longer offered the proper NMake generator and configuration started failing. After much mucking about, I have finally discovered the solution and am documenting it here in case someone else comes across the same issue.

The solution is simple: install the newly released Qt Creator 3.1, which has built-in support for detecting and using the VS 2013 compilers with CMake. Qt Creator 3.0 detected the compilers successfully but was apparently not passing the right arguments to CMake. After updating to 3.1, all my CMake projects work again.


Number of Linux desktop users

Just a random thought: I was wondering how many active Linux desktop users there are. It turns out we can do a simple back-of-envelope calculation. Some estimates put the number of active desktops at around 1.4 billion, and Linux desktop market share (from various stats such as NetMarketShare and Steam) appears to be around 1.5%. That works out to 1.4 billion × 1.5% ≈ 21 million active Linux desktop users.

Some notes/thoughts from BUILD news so far

Just some thoughts from MS announcements so far:

  • Universal apps: WinRT now being available on WP 8.1 is very big news. A lot of code can now be shared between Windows Store and Windows Phone Store apps, and the VS project system has also evolved to make this easier. “Universal apps” are not really a single package: you still generate and upload separate packages for, say, Phone and Windows Store, but that is just an implementation detail. Code sharing and the unified API are the big news.
  • Direct3D 12: Still very few details, particularly for compute. I do hope they have substantial new DirectCompute features, particularly so that C++ AMP can evolve, but there have been no announcements so far. MS has not committed to exact OS support yet, but it looks very likely that Windows 8.1+ will be supported. Windows 7 is “under consideration”.
  • Sandboxing and third-party JITs and interpreters: I think the situation here is unchanged. Third-party Store apps still don’t have access to the appropriate memory APIs for security reasons, so third-party JITs are still not possible. AFAIK, interpreters exposed to users are still not allowed in the Store either, so forget being able to distribute Python as a Windows Store app. I guess I am a niche here, but as a compiler writer and programmer this still bums me out. As far as developer tools are concerned, desktop apps are still the way to go.

Using Qualcomm Adreno Profiler on Linux, particularly for OpenCL

Qualcomm has updated their Adreno Profiler and it now works on Linux. To clarify, the setup is that you run Linux on the development PC and Android on the device being profiled. However, when I downloaded the tar file containing the Adreno Profiler for Linux, it contained only “exe” files, so initially I was confused. Looking at Qualcomm developer forum threads, though, someone posted that it appears to work with Mono.

I have tried the following steps and they seem to work, at least for profiling OpenCL. I have used this for profiling OpenCL apps running on an IFC6410 development board (Snapdragon 600) running Android 4.2.2. I have not tried OpenGL apps yet; please feel free to report your experiences with OpenGL profiling. The steps were as follows:

1. Install Mono and the associated Mono libraries, especially the “core 4.0” and WinForms libraries for Mono.

2. Connect your device over USB and make sure it is listed when you run “adb devices”.

3. Set some system properties using “adb shell setprop” commands to enable Adreno profiling. See the documentation accompanying the Adreno Profiler for the exact settings.

4. Start the Adreno Profiler using “mono AdrenoProfiler.exe”.

5. Start the app on the device using adb shell etc. It will block, waiting to connect to the Adreno Profiler.

6. In the Adreno Profiler, hit “connect” and connect to your device. The app will still remain frozen. Now start “Scrubber CL” and hit the “record” (red) button. The app will resume.

7. Once the app finishes, examine the data.

Overall, the Adreno Profiler offers some really helpful metrics for OpenCL, such as a timeline of the various threads, GPU queues etc., similar to, say, VTune. However, I have not yet found a way to actually view detailed hardware counter data, such as cache misses, which the documentation says I should be able to view.

Also, while the Adreno Profiler offers some static kernel analysis metrics such as the number of ALU instructions, MOVs, NOPs etc. in the compiled code, I would really like to just view the generated assembly, because the instruction counts do not distinguish what is inside loops from what is outside, making them less useful for kernels involving loops.

DirectX 12 needs to deliver on compute

DirectX 12 will be unveiled soon. Given that DirectCompute forms the core of the MS GPGPU computing stack (powering, for example, their C++ AMP implementation), I really hope that DirectCompute 12 delivers on the compute side. Unlike OpenCL, DirectCompute has had the massive advantage of good integration into a graphics API, which is why we have seen DirectCompute adopted in 3D apps such as games where OpenCL really wasn’t. However, DirectCompute has fallen behind the times, with little support for modern GPU features. This, in turn, hurts GPGPU solutions such as the MS implementation of C++ AMP. Five things I am hoping to see:

1. Support for shared-memory architectures, eliminating CPU-GPU data transfers where possible and also allowing platform-level atomics. This will be beneficial for everything from mobile (where SoCs rule the roost and discrete GPUs basically don’t exist) to servers.

2. Exposing multiple command queues per GPU, thus allowing concurrent execution of kernels.

3. Launching GPU kernels from within GPU kernels.

4. A stable low-level bytecode that is more suitable as a target for high-level compilers. D3D 11 has a bytecode, but ISVs are not given the proper specification and are discouraged from using it. This needs to be opened up to enable third-party compilers for high-level languages.

5. Compatibility with Windows 8.

Broadcom VideoCore IV architecture overview

Broadcom has decided to open-source their graphics driver for one of their VideoCore IV-powered Android chipsets. This is an awesome and welcome step. They also released an architecture manual giving details on many things. I will try to summarize some of the information known about VideoCore IV so far.

VideoCore IV refers to a family of closely related GPUs. Implementations have shown up in various chipsets: for example, the BCM2835 used in the Raspberry Pi, the BCM2763 used in several Nokia Symbian Belle handsets (e.g. the Nokia PureView 808, 701, 700 etc.), the BCM21553 in Android handsets such as the Samsung Galaxy Y, and the BCM28155 in Android handsets such as the Samsung Galaxy S II Plus.

Overview: Various chipsets have their own peculiarities. In the Raspberry Pi and Nokia flavors, VideoCore IV consists of two distinct processors. The first is the actual programmable graphics core, which I will refer to as the PGC. The second is a coprocessor. This embedded processor, not to be confused with the main CPU, runs its own operating system and handles almost all the actual work of the OpenGL driver; for example, shader compilation is done on this embedded processor and not on the main CPU in the Raspberry Pi and Nokia flavors. The OpenGL driver on these devices is just a shim that passes calls to the embedded coprocessor via an RPC-like mechanism. My speculation (low-confidence) is that the BCM21553, for which Broadcom released the source code, does not have the embedded coprocessor and the driver runs on the main CPU. The Nokia variants have an additional detail: they feature 128MB of LPDDR2 on-package memory dedicated to the GPU, separate from the 512MB RAM in these devices, to provide high-bandwidth (at the time) graphics RAM. The Raspberry Pi does not have this buffer and its GPU reads/writes main memory.

GPU core: VideoCore’s PGC is a tile-based renderer (TBR). Apart from the fixed-function parts, the programmable portion of the chip is organized into “slices”, which are similar to, say, “compute units” in GCN. Each slice consists of up to four SIMD units called QPUs, one special function unit (SFU), one or two texture and memory units (TMUs), as well as some caches. The architectural diagram shows up to four slices, but I guess the actual number may vary between chipsets (not confirmed).

QPU (SIMD ALUs): Each QPU consists of two SIMD ALUs. The ALUs are not symmetric: each is physically 4-wide (i.e. 128-bit), but one is an “add” unit and the other is a “mul” unit, handling floating-point add and multiply operations respectively, along with some other ops such as integer and logical operations. The QPU is a dual-issue processor, capable of feeding one add and one mul instruction per cycle to the respective units. Logically, each ALU in the QPU is a 16-way machine that executes a 16-way instruction over 4 cycles. Thus each QPU can perform 8 flops/cycle, and each slice can do up to 32 flops/cycle. Each QPU has access to 4kB of registers, as well as a few accumulators. The registers are organized as two register files of 2kB each, with each register file holding 32 vector registers of 64 bytes (16 × 4 bytes), which makes sense given the 16-way logical view of the QPU. Each QPU can run two threads.

Memory (TMUs and VPM): TMUs have their own L1 cache, and there is also a separate L2 cache that is shared across slices. Cache sizes are unknown. QPUs read/write vertex data through a separate path called the Vertex Pipe Manager (VPM). The VPM is a system-wide shared unit and appears to have a buffer of either 8kB or 16kB; it performs DMA to read/write vertex data between main memory and this buffer. The VPM is optimized essentially for reading/writing vectors of data from/to main memory and from/to the QPUs’ vector register files. Vertex fetch is general enough to implement memory gather operations, but it is not clear if scatter is also supported.

RPi and Conclusions: Consider the Raspberry Pi. We already know that the published frequency is 250MHz, that the QPUs can do 24 GFLOPS and that the TMUs can do 1.5 GTexels/s. Per clock, then, the GPU performs 24/0.25 = 96 flops/cycle and 1.5/0.25 = 6 texels/cycle. At 8 flops/cycle per QPU, this is most likely achieved through 3 slices, each with 4 QPUs and 2 TMUs. Overall, VideoCore IV is an interesting architecture. Performance-wise, the implementation in the Raspberry Pi does not compare to modern mobile GPUs such as the Adreno 330 or Mali T600 series, but then again the Raspberry Pi uses an old SoC that was meant to be cost-conscious even at the time. For a low-cost GPU, VideoCore IV looks to be quite competent. It will be interesting to see what Broadcom is cooking up for VideoCore V.

Anandtech post on HSA, Kaveri FP perf

Wrote two articles recently for AnandTech. “A Deep Dive on HSA” was a theoretical look at HSA and related technologies such as HSAIL, hUMA and hQ, as well as the programming-tools infrastructure. Next, I wrote a bit about the floating-point performance (on both CPU and GPU) of recent Intel and AMD chips, including Kaveri.

Do check them out and let me know what you think :)

Qt Creator on Windows

I have been using Qt Creator on Linux for a while now for my C++-based projects. Qt Creator has a nice editor with fairly speedy autocomplete and good refactoring support. I typically use CMake, and Qt Creator has decent support for CMake under both Windows and Linux, as well as decent integration with tools like Mercurial. On Windows, however, I was using Visual Studio (either 2010 or 2013, depending on the project dependencies) for C++. While VS has a nice debugger, its editing and refactoring functionality for C++ is quite a bit behind Qt Creator's in my experience. I finally got around to using Qt Creator on Windows, even for non-Qt projects, and I find that it offers the same great experience there. The install process was pretty painless and Qt Creator detected all my tools (various VS versions, the CMake binary etc.).

Kudos to Digia and the Qt Project for the great tools. I am constantly amazed at the quality of work they put out. I only use VS now for occasional debugging. Thoughts welcome.

Testing write bandwidth to regular, write-combined and uncached memory

Write combining is a technique where writes may get buffered into a temporary buffer and then written to memory in a single large transaction. This can apparently give a nice boost to write bandwidth. Write-combined memory is not cached, so reads from write-combined memory are still very slow. I came upon the concept of write combining while looking at data transfer from CPU to GPU on AMD’s APUs. It turns out that if you use the appropriate OpenCL flags (CL_MEM_ALLOC_HOST_PTR | CL_MEM_READ_ONLY) while creating a GPU buffer on an AMD APU, then AMD’s driver exposes these buffers as write-combined memory on the CPU. AMD claims that you can write to these buffers at pretty high speeds, so this can act as a fast path for CPU-to-GPU data copies. In addition to regular and write-combined memory, there is also a third type: uncached memory without write combining.
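As a rough illustration of that fast path, here is a minimal sketch (assuming an already-created OpenCL context and command queue; error handling omitted, and the helper name is mine) of the create/map/write/unmap sequence:

```cpp
#include <CL/cl.h>
#include <cstring>

// Hypothetical helper: ctx and queue are assumed to exist already, and
// src points to `bytes` bytes of host data destined for the GPU.
void copy_to_gpu_via_wc(cl_context ctx, cl_command_queue queue,
                        const void* src, size_t bytes)
{
    cl_int err;
    // The flag combination that AMD's driver may back with
    // write-combined host memory on APUs.
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_ALLOC_HOST_PTR,
                                bytes, nullptr, &err);

    // Map for writing: the returned pointer is the CPU-visible view.
    void* dst = clEnqueueMapBuffer(queue, buf, CL_TRUE, CL_MAP_WRITE,
                                   0, bytes, 0, nullptr, nullptr, &err);

    std::memcpy(dst, src, bytes); // large sequential writes suit write combining

    clEnqueueUnmapMemObject(queue, buf, dst, 0, nullptr, nullptr);
    clReleaseMemObject(buf);
}
```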

I wanted to understand the characteristics of write-combined and uncached memory compared with “regular” memory allocated using, say, the default “new” operator. On Windows, we can allocate write-combined or uncached memory using the VirtualAlloc function by passing the flags PAGE_WRITECOMBINE and PAGE_NOCACHE respectively. So I wrote a simple test. The code is open-source and can be found here.
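For reference, the three allocations look roughly like this (a sketch rather than the exact test code; note that PAGE_WRITECOMBINE and PAGE_NOCACHE are modifiers that must be combined with a protection flag such as PAGE_READWRITE):

```cpp
#include <windows.h>

int main()
{
    const size_t bytes = 32u * 1024 * 1024; // ~32MB, as in the test below

    // Regular cached memory via the default allocator
    char* regular = new char[bytes];

    // Write-combined memory
    void* wc = VirtualAlloc(nullptr, bytes, MEM_RESERVE | MEM_COMMIT,
                            PAGE_READWRITE | PAGE_WRITECOMBINE);

    // Uncached memory without write combining
    void* uc = VirtualAlloc(nullptr, bytes, MEM_RESERVE | MEM_COMMIT,
                            PAGE_READWRITE | PAGE_NOCACHE);

    // ... run the copy tests here ...

    VirtualFree(wc, 0, MEM_RELEASE);
    VirtualFree(uc, 0, MEM_RELEASE);
    delete[] regular;
    return 0;
}
```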

For each memory type (regular, write-combined and uncached), we run the following test: allocate a buffer, copy data from a regular CPU array into it, and measure the time. We do the copy (to the same buffer) multiple times, measure the time of each copy, and report the timing and bandwidth of the first run as well as the average of the subsequent runs. The first-run timings give us an idea of the overhead of first use, which can be substantial. For bandwidth, if I am copying N bytes of data, then I report bandwidth computed as N/(time taken). Some people prefer to report bandwidth as 2*N/(time taken) because they count both the read and the write, so that is something to keep in mind.
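In outline, the measurement loop looks something like this (again a sketch rather than the exact open-sourced code; dst is one of the three buffers above and src a regular array of n bytes):

```cpp
#include <chrono>
#include <cstdio>
#include <cstring>

// Copies n bytes from src to dst `runs` times (runs >= 2) and prints the
// first-run and average timings, with bandwidth computed as n / time.
void measure(void* dst, const void* src, size_t n, int runs)
{
    using hrclock = std::chrono::high_resolution_clock;
    double total = 0.0;
    for (int i = 0; i < runs; ++i) {
        auto t0 = hrclock::now();
        std::memcpy(dst, src, n);
        auto t1 = hrclock::now();
        double s = std::chrono::duration<double>(t1 - t0).count();
        if (i == 0)
            std::printf("first run: %.2f ms, %.3f GB/s\n", s * 1e3, n / s / 1e9);
        else
            total += s;
    }
    double avg = total / (runs - 1); // average excludes the first run
    std::printf("average:   %.2f ms, %.3f GB/s\n", avg * 1e3, n / avg / 1e9);
}
```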

I ran the test on a laptop with an AMD A10-5750M (Richland), 8GB of 1600MHz DDR3, Windows 8.1 x64 and VS 2013 x64.

The average bandwidth results for “double” arrays (size ~32MB) were 3.8GB/s for regular memory, 5.7GB/s for write-combined memory and 0.33GB/s for uncached memory. The bandwidth reported here is for the average runs, not including the first run. The first-use penalty was substantial: the first run took about 22ms for regular memory, 81ms for write-combined memory and 164ms for uncached memory. Clearly, if you are only transferring the data once, write-combined memory is not the best solution; in this case you need around 20 runs for the write-combined memory to break even in terms of total copy time (arithmetic spelled out below). But if you are going to reuse the buffer many times, then write-combined memory is a definite win.
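To spell out the break-even arithmetic: at the average bandwidths above, a ~32MB copy takes roughly 8.4ms to regular memory and 5.6ms to write-combined memory, so solving 22 + 8.4n ≈ 81 + 5.6n for the number of subsequent copies n gives n ≈ 21.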
