C++11 / C++0x Documentation

1 11 2011

Do you want to improve your “old” C++ code base by implementing new features or bug fixes with C++11? The questions you will face along the way are manifold. First, what exactly does C++11 allow me to do? Where is the documentation? What is the purpose of those features? Compared to languages like Ruby or Python, the C++ language and standard library are not particularly well documented online.

As a preparation for a lecture I’m holding in the next semester I was compiling a list of links to C++11 / C++0x documentation sites and as some kind of personal archive I will post them here:

General C++11 / C++0x FAQ

New Features (Overview)

Compiler Support

Standard Library Documentation

I will update this list as I find new sources. If you have any links you think are missing, please mention them in the comments and I will add them.
-Martin




Easy parsing of sysfs to get CPU topology data using libsystopo

20 01 2011

When you are working with main memory, it is crucial to make sure that all data structures are correctly sized and aligned. A typical approach is to create blocks of data that are processed independently. For the developer, the question is: how large should such blocks be? The answer is that those blocks should be sized to match the cache.

Now, how large is the cache on the system you are using? You can either run experiments to detect the different cache levels and cache line sizes, or, if you happen to have an Intel Linux system at hand, simply read the information that the kernel exports in the sysfs filesystem.

Parsing this information by hand at development time might be acceptable, but at run time the best approach is to adapt to the actual system configuration automatically. At the moment, libudev does not support reading this information, so you are on your own.

To avoid everybody writing the same code, I took some time to write a small library that reads the information about the CPU caches and the CPU topology and makes it easy to process in your program. As usual, you can find the most current version of the code on GitHub.

#include <systopo.h>
using namespace systopo;

int main(void)
{
    System s = getSystemTopology();
    return 0;
}

The System structure holds all the data parsed from sysfs and looks like this:

    struct Cache
    {
        size_t coherency_line_size;
        size_t level;
        size_t number_of_sets;
        size_t physical_line_partition;
        size_t size;
        size_t ways_of_associativity;

        std::string type;
    };

    struct Topology
    {
        size_t core_id;
        size_t physical_package_id;
        std::vector<size_t> core_siblings;
        std::vector<size_t> thread_siblings;
    };
    
    struct CPU
    {
        std::vector<Cache> caches;
        Topology topology;
    };


    struct System
    {
        std::vector<CPU> cpus;
        std::vector<size_t> online_cpus;
        std::vector<size_t> offline_cpus;
    };

For more information about the meaning of the Topology fields, please refer to the kernel documentation. The meaning of the CPU cache fields should be clear; if not, refer to “What every programmer should know about memory”.

If you have feedback, comments or ideas, I’m glad to respond!

– Martin





Switch, case, typelists and type_switch

31 08 2010

Whenever you are building a system that has its own type system, you will reach a point where you perform type-dependent operations. If your types map seamlessly to standard integral types, most of the mapping and extraction code can be handled by simple template methods, but from time to time you will find code fragments like the following:

switch (type)
{
case IntegerType:
    do_something_important<int>(value);
    break;
case DoubleType:
    do_something_important<double>(value);
    break;
}

Interestingly, the only difference between the two cases above is the requested type. A concrete example is hashing a value: the type of the value is stored in a variable, and depending on the actual type, a different hash function has to be called. When you find something like this the first time, you will feel OK; the second time, a little more nervous; and the third time…

The question now is: how can I rewrite my code so that it is less explicit and, most importantly, easier to extend? The biggest problem with the above solution is that every time the type system is extended, all of this code has to change. There must be an easier way.

One of my first approaches was macro magic: iterate over a sequence and generate the right code by text expansion. However, this does not work, because macros in C++ are not recursive and will not be expanded a second time. Reading “Modern C++ Design” by Alexandrescu (a must-read), I stumbled upon typelists (with some support from @bastih01). After one evening of wrapping my head around them, I finally made some progress.

The solution I found is based on static recursive template generation combined with dynamic type switching at runtime. The reason for this rather complicated approach is the following: while the general type information is available at compile time, the instance-specific type information can only be mapped through an enum at run time.

Enough words; what's the solution?

Consider the following setup: First we define the type list and the enum for storing the type information.


#include <boost/mpl/vector.hpp>

typedef enum
{
    IntegerType,
    FloatType,
    StringType
} DataType;

typedef boost::mpl::vector<int, float, std::string> basic_types;

Now we need to implement our type_switch operator. It is essentially based on TinyTL, but adapted to a Boost environment, because I did not find anything similar in Boost directly.

template <typename L, int N = 0, bool Stop = (N == boost::mpl::size<L>::value)>
struct type_switch;

template <typename L, int N, bool Stop>
struct type_switch
{
    template <class F>
    typename F::value_type operator()(size_t i, F& f)
    {
        if (i == N)
        {
            return f.template operator()<typename boost::mpl::at_c<L, N>::type>();
        } else {
            type_switch<L, N + 1> next;
            return next(i, f);
        }
    }
};

template <typename L, int N>
struct type_switch<L, N, true>
{
    template <class F>
    typename F::value_type operator()(size_t i, F& f)
    {
        throw std::runtime_error("Type does not exist");
    }
};

If you look at the above code for the first time, it is hard to tell what is going on, but once you understand template recursion it becomes perfectly clear. Easy things first: boost::mpl::size is a template that yields the size of a typelist, and boost::mpl::at_c provides random access into a typelist via a constant index.

The template type_switch takes three parameters: the typelist, the current position in that list (defaulting to 0), and a boolean determining whether the recursion should stop. The primary implementation of the struct checks in operator() whether the runtime value i equals N; if so, it invokes operator() on the function object passed in, instantiated with the type at position N. Otherwise it instantiates the next template level with N increased by 1. This works because all index values for the complete list are known at compile time and can therefore be used as template parameters. To avoid infinite recursion, a dedicated template specialization with Stop=true provides an implementation that should never be reached and does not trigger any further template recursion.

But back to the functor used in this setting. There is one requirement on the functor: it has to specify the value_type of its operator() method directly. A sample implementation based on Boost's hash could look like the following:

template <typename T>
struct hash_functor
{
    typedef T value_type;
    AbstractTable* table;
    size_t f;
    ValueId vid;

    hash_functor() : table(0) {}

    hash_functor(AbstractTable* t, size_t f, ValueId v) : table(t), f(f), vid(v) {}

    template <typename R>
    T operator()()
    {
        return boost::hash<R>()(table->getValueForValueId<R>(f, vid));
    }
};

For this functor, T defines the return type via the value_type typedef, and R is the actual type selected by the type_switch. Instead of the cluttered switch/case statement, the code for my type-dependent hash-value method now looks a lot better:

size_t hash_value(AbstractTable* source, size_t f, ValueId vid)
{
    hash_functor<size_t> fun(source, f, vid);
    type_switch<basic_types> ts;
    return ts(source->typeOfColumn(f), fun);
}

A last word on the cost of this approach: each level of the hierarchy generates at least one method call plus the evaluation of an if statement. A plain switch/case only generates the comparison, but I think this overhead is negligible compared to the huge amount of time saved when it comes to extending the set of usable data types.





Using Intel Threading Building Blocks in a C++ Xcode Project under Mac OS X

18 11 2009

This tutorial briefly explains how to include and use Intel Threading Building Blocks (TBB) in a C++ Xcode project on Mac OS X. Intel has put together an excellent tutorial that explains the basics, but it assumes that the libraries are already included in your project.

1) Download the source code from a stable TBB release (e.g. tbb22_20090809oss_src.tgz).

2) Unpack it and run “make”.

3) Create a new project in Xcode, e.g. a Command Line Tool of type “C++ stdc++”. The project in this example is called “tbbWorkbench”.

4) Open Finder and copy the tbb folder into your project's folder.

5) Go back into Xcode and link the libraries “libtbb.dylib” and “libtbbmalloc.dylib” from the tbb22_20090809oss/build/macos_intel64_gcc_cc4.2.1_os10.6.2_release/ folder (inside your project folder) to your target.

6) Add the “Header Search Path” (tbb22_20090809oss/include) and make sure “Always Search User Paths” is enabled.

7) Now add a new “Copy Files” build phase that copies both TBB .dylib files to your “Products Directory”. This is needed because otherwise the libraries won't be found at runtime and you'll face an “Error from Debugger: Cannot access memory at address 0x0”.

8) As a final step, you can insert the first sample code from the Intel tutorial and you're ready to go! 🙂

– Christian