Thursday, July 14, 2011

Evaluation of the .NET lecture

Hello everybody,

as I wrote in this post, I gave a lecture on modern programming concepts using the example of .NET. I have organized the practical courses for a freshman lecture before, but this was the first time I was fully in charge of a lecture. At the end of each semester, our faculty conducts an evaluation of all its lectures and practical courses, and today I got my results.

I'd like to share them with you, and I must say that I'm overwhelmed and very happy to receive this kind of feedback! With a quality index of 100, it simply can't get any better!

So this goes out to all of my students: Thank you for your collaboration and for making this lecture such a success!

Link: Lecture evaluation (German)

K!     

Wednesday, April 13, 2011

The .NET Multicore Group to offer a lecture on modern development concepts

For many years, our chair has offered several lectures on software design, design patterns, and modern programming concepts. The introductory lectures for freshman students are all based on Java, because Java has become a common standard across the software engineering chairs here at the Faculty for Informatics and Computer Science.

For students majoring in software engineering, we also offer a .NET lecture in which we teach modern concepts of programming environments. This semester, I will give this lecture, enriching the existing material with results from our research and practical courses.

Apart from theoretical aspects like the CLR, the CTS, or MSIL, I'd like to build a profound understanding of these concepts through practical courses. If you want, take a look at our material; I'll publish all lecture slides and exercises on the lecture site.

Greetings,
K!

Wednesday, April 6, 2011

Loop field types - Loop exterior fields

On iteration-exterior fields.

When investigating the parallelization potential of fields used in loops, I suggest distinguishing between iteration-interior and iteration-exterior fields, as depicted here.

Today, I want to say something about the latter field type.

From the perspective of a single iteration of a loop, there are only two different kinds of fields: the ones that are declared within the loop itself and the ones that are not. So, from this perspective, fields can either have a local character (declared within the loop iteration) or a global character (declared 'outside'). Fields that are declared beyond the scope of a single loop iteration can be one of the following (see the sketch after this list):
- method fields, declared within the scope of the method containing the loop,
- object fields, declared within the scope of the class instance containing the method, or
- class fields, declared within the scope of the class.
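
To make these field kinds concrete, here is a minimal C# sketch; the class and field names are made up purely for illustration:

    public class Simulation
    {
        private static int stepCount = 0;           // class field: declared in the scope of the class
        private double[] values = new double[100];  // object field: declared in the scope of the instance

        public double Run()
        {
            double sum = 0.0;                        // method field: declared in the method containing the loop

            for (int i = 0; i < values.Length; i++)
            {
                double scaled = values[i] * 2.0;     // iteration-interior field: local to a single iteration
                sum += scaled;                       // sum, values and stepCount are iteration-exterior:
                stepCount++;                         // they live beyond a single iteration
            }
            return sum;
        }
    }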


Knowing this, we can define patterns that tell us how to turn such code into parallel code. We are currently developing a tool called "AutoAnalyzer" that automatically detects the different field profiles for a given application. Based on this information, we want to be able to suggest a suitable parallelization for a given method or loop.
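
To give an idea of what such a suggestion could look like (this is a hand-written sketch of a common reduction pattern, not actual AutoAnalyzer output), the iteration-exterior accumulator sum from the sketch above can be parallelized with thread-local partial sums:

    using System;
    using System.Threading.Tasks;

    class ReductionSketch
    {
        static void Main()
        {
            double[] values = new double[100];   // stands in for the object field from the sketch above
            double sum = 0.0;
            object sumLock = new object();

            // The iteration-exterior accumulator becomes a reduction: every thread keeps
            // a local partial sum, and the partials are merged into 'sum' at the end.
            Parallel.For(0, values.Length,
                () => 0.0,                                        // per-thread initial partial sum
                (i, loopState, partial) => partial + values[i] * 2.0,
                partial => { lock (sumLock) sum += partial; });   // merge the partial result

            Console.WriteLine(sum);
        }
    }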

Friday, March 25, 2011

SRG student to win the RSA award 2012!

Hey folks,

just a short note to let you all know that the KIT has awarded one of our student positions for its excellence in research! The Research Student Award (RSA) honors student involvement that is closely tied to current research.

I feel very flattered :c)

K!

Wednesday, January 12, 2011

AutoProfiler - Master/Worker identification

Welcome back,

today I'd like to quickly share some results we got from AutoProfiler concerning pattern extraction from runtime profiles. As test benchmarks, we used the Parallel Programming Samples offered by Microsoft and, as a real-world example, a desktop search application that we developed and parallelized manually.

For this first approach, we analyzed the control flow of all programs by automatically instrumenting the binaries and collecting the following indicators:
  • Number of times a method is called
  • Method-inclusive time share
  • Method-exclusive time share 
Based on this information, we tried to determine the software pattern that could be used to parallelize the specific piece of code. As we have a manually parallelized version of all benchmark programs, we can identify which methods have been touched by developers during manual parallelization and which patterns they have used.
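
To give an impression of what we collect, the per-method indicators boil down to a record like the following simplified C# sketch (not the actual AutoProfiler data model):

    // Simplified sketch of the per-method indicators, not the actual AutoProfiler data model.
    public class MethodIndicators
    {
        public string MethodName { get; set; }          // fully qualified method name
        public long CallCount { get; set; }             // number of times the method is called
        public double InclusiveTimeShare { get; set; }  // share of total runtime including callees, in percent
        public double ExclusiveTimeShare { get; set; }  // share of total runtime excluding callees, in percent
    }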

The following table shows the results of one benchmark and compares them to the suggestions we got out of AutoProfiler.

Method          #calls   %incl. time   %excl. time   Manual pattern   AutoProfiler suggestion
PerfSimStep()      120         91.82          0.10   Worker           Worker
Sim1Step()        7260         91.40         57.05   Worker           Master

I think the next step is to take a deeper look into data dependencies in order to better distinguish different patterns, for example Master/Worker from Pipeline. Also, we try to cope with a special form of calls from a master to a worker thread: if a detected worker is called very often with only low CPU load per call, inlining this worker instead of explicitly spawning it off into a separate thread might lead to a performance gain.
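
As a rough illustration of that last point (a hand-made sketch with made-up names, not AutoProfiler output): spawning a task for every cheap worker call mostly pays scheduling overhead, while inlining the calls keeps them in the master's thread:

    using System;
    using System.Threading.Tasks;

    class InlineVsSpawnSketch
    {
        static void DoTinyStep(int item) { /* very little CPU work per call */ }

        static void Main()
        {
            int[] items = new int[10000];

            // Fine-grained spawn: one task per cheap call, the scheduling overhead dominates.
            var tasks = new Task[items.Length];
            for (int i = 0; i < items.Length; i++)
            {
                int local = items[i];                          // capture a copy of the loop variable
                tasks[i] = Task.Run(() => DoTinyStep(local));
            }
            Task.WaitAll(tasks);

            // Inlined worker: the same calls stay in the master's thread, which is often
            // faster when each call carries almost no CPU load.
            for (int i = 0; i < items.Length; i++)
            {
                DoTinyStep(items[i]);
            }
        }
    }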

K!

Tuesday, December 21, 2010

AutoProfiler - Detecting parallelization potential

As I wrote in my last post, we focus on rather coarse-grained parallelization schemes and patterns. In our research group, we developed AutoProfiler, a tool that detects parallelization potential by analyzing the runtime profile of a sequential application.

Also, our results tell us that we detect most of the methods that would be parallelized by an experienced developer in a manual parallelization process.

Today, I want to write something about how AutoProfiler reveals this.

In my post on the parallelism of the future, I already stated that a combination of static and dynamic approaches is the most promising way. So AutoProfiler starts with the creation of a dynamic runtime profile. Specifically, AutoProfiler records indicators such as the number of times a method is called, the runtime share of a method, and so on. After that, a metric tries to map those indicators, together with the actual indicator values, to known parallel software design patterns.
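
To make that a bit more tangible, a purely illustrative mapping heuristic could look like the following C# sketch (this is not the metric AutoProfiler actually uses):

    class PatternMetricSketch
    {
        // Illustrative heuristic only: a method with a high inclusive but low exclusive time
        // share spends most of its time in its callees, which hints at a master; a high
        // exclusive share hints at a worker.
        public static string SuggestPattern(double inclusiveShare, double exclusiveShare)
        {
            if (inclusiveShare > 50.0 && exclusiveShare < 10.0)
                return "Master";
            if (exclusiveShare > 30.0)
                return "Worker";
            return "No suggestion";
        }
    }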

One example: let's say we have a method foo() that calls some other methods in its body, such as bar1(), bar2() and bar3(), and these three methods account for most of the runtime of foo(). For this specific example, AutoProfiler would come to the conclusion that foo() is the master in a master/worker pattern. The calls to bar1() through bar3() should be made in parallel, as they are the workers.
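
A minimal sketch of what that suggestion amounts to in code, keeping the hypothetical names foo and bar1 to bar3 from the example:

    using System.Threading.Tasks;

    class MasterWorkerSketch
    {
        static void bar1() { /* heavy work */ }
        static void bar2() { /* heavy work */ }
        static void bar3() { /* heavy work */ }

        // Sequential version: foo() is the master, the bars consume most of its runtime.
        static void foo()
        {
            bar1();
            bar2();
            bar3();
        }

        // Master/worker version: the three worker calls run in parallel,
        // assuming they have no data dependencies between them.
        static void fooParallel()
        {
            Parallel.Invoke(bar1, bar2, bar3);
        }
    }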


Of course, it's not that simple, as the control flow has to be seen in combination with the data flow. The method foo() might even be a whole pipeline with different stages, but I think the main idea of AutoProfiler has become clear.

If you know my blog, you know that this is just one piece of the puzzle - the big picture is still to bring all of this together in one IDE. It's crucial to bring this information closer together and closer to the developer.

Greetings,
K!

Friday, December 3, 2010

The .NET Multicore Group

Hey there,

as our research project grows, we initiated the ".NET Multicore Group" at the Karlsruhe Institute of Technology (KIT). Our project website can be found here.


Currently, we have 3 scientific assistants and 5 students working together. If you know my blog a bit, it might not be a big surprise that our research interest is the preparation and conversion of sequential applications into parallel ones, and the runtime defects that arise in parallel applications.

Have a nice weekend.

K!