On the way to NRW06

On Thursday 27th July, I delivered two sessions at The Community Conference NRW06 in Dusseldorf. The travel started today!

This was my first visit to Germany, and I was lucky enough to have a trouble-free journey to Dusseldorf: Edinburgh to London Heathrow, then on to Dusseldorf. There was a little excitement at Edinburgh Airport: we taxied out and stopped at right-angles to the runway itself, so looking left and right we could see its entire length. The captain announced a brief delay and hinted that we should look out to the left of the aircraft…where we saw a couple of military jets bobbing their way through the skies. The captain went on to tell us that they were “low on fuel”.

Arrival in Dusseldorf was remarkably similar to my previous ventures to mainland Europe, most notably Vienna and Amsterdam. The efficiency of the German railway system was obvious from the moment I set foot on the platform. Vandalism of trains in Europe seems to be kept to a minimum, perhaps because the Europeans keep their trains moving as much as possible – a trick we would do well to copy here in the UK. And what about the prices? I paid 2€ each way for a 10 minute train ride from Dusseldorf airport to Dusseldorf HBF. How much did I pay to get from Inverkeithing to Edinburgh Airport? £4.50, each way…nearly a tenner.

nrw06-1.gif

I enjoyed dinner and drinks with Dan (aka Lennybacon). We drank “alt” beer, brewed locally. No pints were available; instead, beer was delivered in small glasses. An attentive barman ensured our thirsts were quenched – as soon as our glasses were nearly empty, he brought over two more small glasses. A nice trick: the beer was always cold – think about the bottom half of a pint, which is already starting to warm up. And the small glasses meant we actually drank less – “just one more pint” often brings with it an excess of liquid!

Dusseldorf sits on the river Rhine. I was fortunate enough to be given a very brief tour of the Altstadt (Old Town, including the river front) by the NRW06 conference host, Lenny Bacon. They’ve done a super job making the river front a social hub. There’s even a fake beach, with sand! In the distance we could see the cranes dismantling what was one of Europe’s largest roller-coasters. This was a shame as I do like roller-coasters and would have been “up for it”.

Anyway, we had a reasonably early night – as mentioned earlier in this post, Dan had informed me that I was now presenting two sessions! Sleep was in order!


TDD, Code Coverage, .net debugging, tracing and instrumentation

The £10 early bird period for this Scottish Developers event runs until the 15th of July – from that date, the cost rises to £25.

There are still a handful of places available – if you’d like to see two Microsoft MVPs talk’n’code about TDD, Code Coverage, .NET debugging, tracing and instrumentation, and save £15 at the same time, book now!

If you’d like a chance to win a copy of JetBrains’ dotTrace code profiler, be there!

And one lucky person will take home a copy of ReSharper!


What’s a Prodcast?

Update, 09/06/2011
As luck would have it, there is now a startup called prodcast! If you arrived here looking for http://prodca.st/, please feel free to mosey on over there and have a look around!

Prodcast is a website where you can comment on the products you love and share them with your friends on Facebook and Twitter.

Read more: http://www.killerstartups.com/Web20/prodca-st-review-the-products-you-buy

I’m a little confused. I like the sound of the word Prodcast as used by The IW Center here. But over at the QSNews/BCIS here, it seems it’s something completely different.

To me, a Prodcast (if we must have another word for it) is something akin to a product demonstration delivered using the same technologies as a blogcast (i.e. a mix of audio and video).

Comments or thoughts?

NxtGenUG podcast 3

NxtGenUG podcast no.3 is now available.

Show #3 – The one with added security (Saturday, July 01, 2006)
Episode 3 – Dave turns up late, Rich reveals his secret Halo 2 gaming style, gets his knuckles rapped over security and Dave mentions his trip to Office DevCon. Featuring Steve Lamb on Security and TechnoTotty solves another problem.

Featuring: Steve Lamb on security

Dave and Rich chat with Steve Lamb, ITPro evangelist at Microsoft and self-confessed security geek. Dave may be shot and Rich gets a security rap on the knuckles. [Transcript]

Steve Lamb is an IT Pro Evangelist for Microsoft UK, specialising in security technologies. For the past 11 years he has worked solely as a security professional, during which time he has architected and implemented technical solutions for many FTSE 100 companies throughout Europe, the Middle East and Africa. In addition, Steve has worked with the military and governments of various countries.

Steve is well known in the security community for the entertaining way he approaches a serious subject and, most importantly, the impact he has in enabling people to do more with less risk. Security is not just about technology and can be far more complex; however, Steve’s speciality is translating “Rocket Science” (deep technical content) into common-sense advice. His favourite topics are combating malware, embracing PKI and cryptography in general, secure wireless networking, and strategies for user awareness and dealing with social engineering.

Outside work, Steve is a keen freestyle windsurfer, teaches white-water kayaking and occasionally gets to rip on a snowboard – more often he wipes out!

Tristan da Cunha? (update: Golden Britannia Penny)

tdc.gif

Yesterday, the Royal Mail delivered one of those “Only available to the first 150,000 respondents” pamphlets, “urgent attention required”, “time-sensitive documentation inside”. Normally, straight in the bin. Yesterday, I needed something to read for five minutes and that was it.

It seems that in exchange for £5 (GBP), The London Mint Office will send me the new Queen Elizabeth II 80th Birthday £5 coin.

Except that you do have to read the “small print”, which states: “the new Queen Elizabeth II 80th Birthday £5 coin is legal tender in Tristan da Cunha. It is redeemable at any time on the Island. Alternatively, it can be redeemed if accompanied by proof of purchase, through The London Mint Office”.

So, in exchange for £5 GBP which is legal tender here in the UK, you can have a shiny coin that is not legal tender in the UK…nice.

Just where is Tristan da Cunha? Well, it’s a remote island in the South Atlantic Ocean. From what I can gather, it is only accessible by ship, so you are very unlikely to go there on holiday. Further, the economy of Tristan da Cunha is largely fishing-oriented; you are unlikely to find a store selling generic MP3 players or the like! To quote the local policeman (singular): “300 people live here, earning their living from farming, fishing, handicrafts and the sale of colourful postage stamps”. Not really the place you might find Pete Tong on a Friday night.

I have no reason to believe that this is a fraudulent scam, after all, the Royal Mail delivered it and you wouldn’t expect them to be party to anything dodgy. However, do be aware that you’re not exchanging £5 GBP for anything that is legal tender in the UK…all you can do with it is admire it or hopefully receive a refund (postage at your expense!)

A similar story is reported here.

[13/09/2006 update]
After reading some of the incoming comments, it does seem that this is a heavy marketing scam, probably to be avoided. If you feel really strongly about it, perhaps taking it up with the Royal Mail might be the answer. I should add that I didn’t actually part with my cash for this coin, I was merely reading the literature whilst having a seat in my bathroom, then the literature went straight in the bin.

Tony Hetherington invites readers to repeat their tales of woe over here.

[07/10/2008 update]
Thanks to an eagle-eyed comment from a reader (below), the new scam appears to be the “Golden Britannia Penny”. Watch out for that one! More information can be found here:

http://forums.moneysavingexpert.com/showthread.html?t=1143741


NRW06 – Developer Community Conference

I am pleased to announce that not only will I be attending NRW06 on the 27th of July 2006, but I’ll also be speaking!

I’ll be talking about test-driven development and code coverage – pretty much the same session that I am delivering to Scottish Developers a week later.

Here’s an outline of what I’ll be covering:

Code Coverage in .NET

Testing code can be a laborious process that is repetitive in its nature. Empirical evidence confirms that most repetitive processes enjoy a lot of success, or coverage, during early iterations, but later iterations suffer from lower coverage as the tedium sets in. For that reason, we sought to automate the repetitive testing process, i.e. we wrote some code that could replace the repetitive process. The development community achieved this by the adoption of a testing framework that embraced Test-Driven Development (TDD) and testing tools such as NUnit.

The ethos behind TDD and NUnit is “write once, use often”, i.e. once a test has been written it can be used many times. Naturally, by embodying “tests” in code and by using a tool to run those tests, we find the repetitive nature of testing disappears and the process of testing actually begins to provide confidence boosts.

However, whilst adoption of TDD and NUnit provides major advances in the reduction of repetitive testing tasks, they do not help us ensure that the tests actually cover as much of the code-base as is possible/required. It is possible to write a collection of tests that only exercise 25% of the code-base, yet because the tests are successful (i.e. they pass), the developer’s confidence is so high, s/he fails to spot that there is still a lot of test code still to be written.

Code coverage is not a new technique: Boris Beizer discussed it in 1990 and Tom McCabe wrote about it as far back as 1976. Today, we can use graphical tools to determine how much of our code is exercised, or covered, during an execution cycle. Such tools help us identify which areas of our code have not been tested and can help us direct our effort. However, they do rely on some manual effort that is repetitive, i.e. a user/developer must walk through the application. Luckily, if we are practising TDD, we have a set of automated tests that we can tap into, thus alleviating this repetition.

Over the course of 90 minutes Craig will demonstrate four .NET tools: NUnit, NCover, NCoverExplorer and TestDriven.NET. All of these tools are free (or very cheap for commercial use) and work with .NET 1.1 and 2.0. A variety of IDEs are supported, including Visual Studio 2003 and Visual Studio 2005. He will explain the basics of TDD and code coverage and why they are both important skills and processes to include in your development/build process. Examples will be written in C# and Visual Studio 2005.

And I intend to wear a Metallica t-shirt at the after show party 🙂 Why? Well, attend NRW06 and catch up with conference organiser, Daniel Fisher (aka DDD2 and DDD3 speaker lennybacon) and you’ll see why!


Office 2007 beta – on the PCW cover

If you’re unable to download the Office 2007 beta, you’ll be pleased to know that PCW are shipping it on their cover DVD – August 2006 edition. Seems like a reasonably cheap method for getting your hands on this popular beta.

I have just received mine in the post; I’m told I get it before it hits the shops but, since that’s rarely the case, it should be on the shelves now.

JetBrains .NET development tools blog

I am pleased to see yet another company embracing the blogging phenomenon – it [blogging] really does allow developers to reach out to their audience and communicate in a collaborative environment where things actually happen.

JetBrains have started a .NET tools blog, using none other than WordPress.

All the usual JetBrains folks are blogging:

Oleg Stepanov | Sergey Dmitriev | Alex Tkachman | Dmitry Jemerov | Kir Maximov | Sasha Maximova | Mike Aizatsky

It’s probable that you have heard of JetBrains, especially if you are using such products as ReSharper, dotTrace, IntelliJ IDEA or you may well be using their RSS reader OMEA.

They’ve also opened up access to the dotTrace 2.0 Early Access Programme (EAP). More can be found here.

NxtGenUG Podcasts…

Richard Costall and Dave McMahon have released their second podcast.

If you want to hear all about TechnoTotty, England’s football players, sock puppets, DeveloperDeveloperDeveloper, cheesy catchy phrases, or Richard’s experience with SatNav (Richard, please read this!) then it’s worth downloading their podcasts. If that doesn’t float your boat, download it anyway because Microsoft’s very own Mike Taulty talks to Richard and Dave about the Microsoft Windows Communication Foundation (WCF). There’s a full transcript of the interview with Mike available here.

To me, to you…Richard and Dave?

View all the NxtGenUG podcasts here.

Next Scottish Developers event – 3rd August – not to be missed!

.ics calendar entry

Scottish Developers is proud to announce an August half day conference event not to be missed. The place to be is Edinburgh on the 3rd of August 2006 if you have an interest in debugging or code coverage…

AGENDA

13:45 Registration

14:00 Welcome & introductions

14:10 .NET debugging, tracing and instrumentation – Duncan Jones of Merrion Computing

15:40 Break – free beer and pizza, anyone?

16:15 Code Coverage in .NET – Craig Murphy of Scottish Developers

18:00 Close (post event entertainment of our speakers in the traditional way)

.NET debugging, tracing and instrumentation

ABSTRACT
In this session we will walk through the built-in capabilities of the .NET Framework and other tools that provide debugging, tracing and instrumentation for .NET developers. Code examples (in VB.NET) will be included and there will be some statistical analysis of the costs of adding different levels of tracing to your application.

Sections:
+ The need for debugging, tracing and instrumentation
+ Using the Trace and Debug classes
+ Setting a trace level using a trace switch
+ Writing a custom trace listener
+ The built-in .NET performance counters
+ Creating and using custom performance counters
+ The cost of different tracing and instrumentation levels
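As a taster for the tracing material, here’s a minimal sketch of my own (the switch name and messages are illustrative, not taken from Duncan’s session) showing the Trace class gated by a TraceSwitch:

[code lang="C#"]
using System;
using System.Diagnostics;

public class TracingDemo
{
    // The switch level ("General" is an illustrative name) can be set
    // in the application's .config file without recompiling
    private static TraceSwitch traceSwitch =
        new TraceSwitch("General", "Application trace level");

    public static void DoWork()
    {
        // Emitted only when the switch is at Info level or above
        Trace.WriteLineIf(traceSwitch.TraceInfo, "DoWork starting");

        // ...the actual work...

        // Emitted only at Verbose level
        Trace.WriteLineIf(traceSwitch.TraceVerbose, "DoWork: detail");
    }
}
[/code]

A custom trace listener – another of the session topics – would plug in via Trace.Listeners.Add(…).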

BIO
Duncan Jones is a Microsoft MVP in Visual Basic .NET and, since 2001, the technical half of Merrion Computing Ltd, a company that provides printer monitoring solutions for Microsoft Windows based networks. He has been programming in Basic for over half of his life, starting out on the Sinclair ZX-81 and BBC Micro, and has used nearly every variant of Microsoft Visual Basic. He has been developing software commercially since graduating from Aston University in 1993 – originally in Birmingham, then Nottingham, and for the last 8 years in sunny Dublin. He has published 19 articles on CodeProject and is currently traversing the North face of the Visual Basic .NET learning curve.

Outside of IT, his interests are photography, single malt whisky and the never-ending restoration of a 1971 Triumph GT6.

Code Coverage in .NET

ABSTRACT
Testing code can be a laborious process that is repetitive in its nature. Empirical evidence confirms that most repetitive processes enjoy a lot of success, or coverage, during early iterations, but later iterations suffer from lower coverage as the tedium sets in. For that reason, we sought to automate the repetitive testing process, i.e. we wrote some code that could replace the repetitive process. The development community achieved this by the adoption of a testing framework that embraced Test-Driven Development (TDD) and testing tools such as NUnit.

The ethos behind TDD and NUnit is “write once, use often”, i.e. once a test has been written it can be used many times. Naturally, by embodying “tests” in code and by using a tool to run those tests, we find the repetitive nature of testing disappears and the process of testing actually begins to provide confidence boosts.

However, whilst adoption of TDD and NUnit provides major advances in the reduction of repetitive testing tasks, they do not help us ensure that the tests actually cover as much of the code-base as is possible/required. It is possible to write a collection of tests that only exercise 25% of the code-base, yet because the tests are successful (i.e. they pass), the developer’s confidence is so high, s/he fails to spot that there is still a lot of test code still to be written.

Code coverage is not a new technique: Boris Beizer discussed it in 1990 and Tom McCabe wrote about it as far back as 1976. Today, we can use graphical tools to determine how much of our code is exercised, or covered, during an execution cycle. Such tools help us identify which areas of our code have not been tested and can help us direct our effort. However, they do rely on some manual effort that is repetitive, i.e. a user/developer must walk through the application. Luckily, if we are practising TDD, we have a set of automated tests that we can tap into, thus alleviating this repetition.

Over the course of 90 minutes Craig will demonstrate four .NET tools: NUnit, NCover, NCoverExplorer and TestDriven.NET. All of these tools are free and work with .NET 1.1 and 2.0. A variety of IDEs are supported, including Visual Studio 2003 and Visual Studio 2005. He will explain the basics of TDD and code coverage and why they are both important skills and processes to include in your development/build process. Examples will be written in C# and Visual Studio 2005.

BIO
Craig Murphy is an author, developer, speaker, project manager, Microsoft MVP (Connected Systems) and a Certified ScrumMaster. Commercially, Craig has been using Borland Delphi since 1998; today, he uses Visual Studio 2005 and C#. He regularly writes articles and product/book reviews: The Delphi Magazine, International Developer, ASPToday and Computer Headline have published his work. Craig has written for virtually every Developers Group magazine issue since the year 2000! He specialises in all things related to .NET, C#, Borland Delphi, XML/Web Services, XSLT, Test-Driven Development, Extreme Programming, agile methods and Scrum. In his career to date, Craig has written cost estimating software for the oil and gas industry and asset valuation software for local councils and the Ministry of Defence. He has a day-job, a wife and a son.

Craig can be reached via his web site: http://www.craigmurphy.com

VENUE
Microsoft Scotland
127 George Street, Edinburgh, EH2 4JN

More here:

http://www.scottishdevelopers.com/modules/news/article.php?storyid=152


Elementary execution timing using QueryPerformanceCounter

At some point in your career as a developer, you will need to optimise your code to make it run faster or more economically. If you have not yet been asked by a user to “make that report or feature run/finish faster”, hang in there – your time will not be far off.

There are many ways of optimising your code such that it offers the user a more responsive experience (i.e. it runs or feels faster). You may well have implemented an algorithm in the knowledge that it is slow, but gets the job done (e.g. a bubble sort). You have then gone on to optimise the algorithm by replacing it with something that you know to be much faster (e.g. the quicksort). Alternatively, your application may have grown over a long period of time, some calculations are now relying on so much data that it is time to re-think how they work. Or, as is the case here, two factors have resulted in part of my application taking a performance hit:

  1. Growth of code that performs progressively cumulative calculations, i.e. calculate a & b, c is based on a percentage of (a+b), d is based on a percentage of (a+b+c), where a, b and c are recalculated when d is requested – yes I know it’s by no means the best way, however the code was reasonably elegant and more importantly, it was very understandable.
  2. Acceptance that the calculations work and that the code looks reasonably elegant, albeit the calculations take “some time” for relatively few records.

How did the need for optimisation raise its head? Well, in this case, simply adding 20–30 records to my application and then viewing the on-screen reports was sufficient to make me augment the code behind the reports with an hourglass (wait cursor). Whilst the reports worked, the lack of user feedback after the hourglass appeared concerned me. For this particular application, 20–30 records is a little more than the average I would expect it to cope with.

Elementary Execution Timing
Luckily, the Win32 API provides a couple of useful methods that we can use to “time” our function calls. Here’s a class that surfaces those methods for use in C#:

[code lang="C#"]
using System.Runtime.InteropServices;

namespace uSimpleExecutionTimer
{
    public class SimpleExecutionTimer
    {
        // Both Win32 functions return a BOOL (zero on failure)
        [DllImport("kernel32.dll")]
        static extern bool QueryPerformanceCounter(ref long x);
        [DllImport("kernel32.dll")]
        static extern bool QueryPerformanceFrequency(ref long x);

        private long counter1 = 0;
        private long counter2 = 0;
        private long freq = 0;

        // Raw ticks elapsed between Start() and Stop()
        public long TickCount
        {
            get { return counter2 - counter1; }
        }

        // Ticks per second, as reported by the high-resolution timer
        public long TickFrequency
        {
            get { return freq; }
        }

        public long StartTick
        {
            get { return counter1; }
        }

        public long EndTick
        {
            get { return counter2; }
        }

        // Elapsed time in seconds
        public double TickCountSeconds
        {
            get { return (counter2 - counter1) * 1.0 / freq; }
        }

        public bool Start()
        {
            return QueryPerformanceCounter(ref counter1);
        }

        public void Stop()
        {
            QueryPerformanceCounter(ref counter2);
            QueryPerformanceFrequency(ref freq);
        }
    }
}
[/code]

In use, it looks like this:

[code lang="C#"]
SimpleExecutionTimer set = new SimpleExecutionTimer();

set.Start();
PopulateSummaryGrid();
PopulateDetailGrid();
set.Stop();

MessageBox.Show(set.TickCount.ToString());
[/code]

Granted, you might want to do something a little more scientific than a simple MessageBox.Show, however this is enough to give us our first clue relating to the execution time of the two Populate…() methods.
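If you’d rather see seconds than raw ticks, the TickCountSeconds property on the class above does the division by TickFrequency for you – a minimal variation on the snippet above:

[code lang="C#"]
set.Start();
PopulateSummaryGrid();
PopulateDetailGrid();
set.Stop();

// TickCount divided by TickFrequency, e.g. "4.15 seconds"
MessageBox.Show(set.TickCountSeconds.ToString("0.00") + " seconds");
[/code]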

Before optimisation
When I measured the execution time of the two Populate…() methods in my un-optimised application, the TickCount was 14,870,258. With a QueryPerformanceFrequency of 3,579,545, that made for a delay of over 4 seconds. Given that there were only 20 records being processed, 4 seconds is unacceptable, even if the results are correct. With a few other things thrown into the equation, we were at nearly 30 seconds.

The problem with augmentation
Of course, the problem with using QueryPerformanceCounter/SimpleExecutionTimer is the need to add extra lines of code to your application. Removing those lines often means “commenting out” or physically deleting them – both of which involve touching your code, thus introducing the possibility of accidental error. Perhaps more obviously, adding extra lines of code to your application necessitates at least a partial re-compile.
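One way to soften this – a suggestion of mine, not part of the class above – is the ConditionalAttribute: the compiler removes calls to a method marked [Conditional("TIMING")] when the TIMING symbol is undefined, so the instrumentation can stay in the source without shipping in release builds. (The Start() and Stop() calls would need similar wrapping; note that conditional methods must return void.)

[code lang="C#"]
using System;
using System.Diagnostics;

public class TimingReport
{
    // Calls to this method vanish entirely unless the build defines TIMING
    // (e.g. csc /define:TIMING ...)
    [Conditional("TIMING")]
    public static void Show(SimpleExecutionTimer timer)
    {
        Console.WriteLine("Elapsed ticks: " + timer.TickCount);
    }
}
[/code]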

The problem: without going into detail (I’ll save that for another posting/review), it was compounded calculations.
Basically, I had a series of about 15 calculations like so:

[code]
a = CalcA(); // Iterates over a recordset performing up to 15 calculations per record
b = CalcB(); // Iterates over the same recordset performing a different calculation, again 15 times per record
c = (a + b) + 5% of (a + b);
d = c + 10% of c
[/code]

My application required access to a and b on their own, but also required access to c. Simplicity and neatness meant that my calculations, whilst elegant, were repetitive and rather slow. Fortunately I had identified this up front, and wanted to move the application to “feature complete” before spending time optimising it. Thus when my application requested the value of c it actually went and recalculated a and b. Twice. And then we have the calculation for d…you can see the repetition taking its toll, however the calculations are reasonably elegant and obvious.

The solution: I created a new class that performed the calculations once and once only. With the problem identified and a solution in place, how long did the two Populate…() methods take? Well, the results were rather good: the TickCount was only 75,000…fractions of a second. So fast that the populating of the two grids appeared instantaneous. So much so, I could have disabled the code that turned the hourglass on and off! (But I didn’t!)
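The new class boiled down to lazy evaluation: compute each value on first request, cache it, and reuse the cached value thereafter. A minimal sketch of the idea (the CalcA/CalcB stubs and their return values are made up for illustration):

[code lang="C#"]
public class CachedCalculations
{
    private double? a, b, c, d; // each is null until first computed

    // Stubs standing in for the expensive recordset iterations
    private double CalcA() { return 100.0; }
    private double CalcB() { return 200.0; }

    public double A { get { if (!a.HasValue) a = CalcA(); return a.Value; } }
    public double B { get { if (!b.HasValue) b = CalcB(); return b.Value; } }

    // c = (a + b) + 5% of (a + b); a and b are no longer recalculated
    public double C { get { if (!c.HasValue) c = (A + B) * 1.05; return c.Value; } }

    // d = c + 10% of c
    public double D { get { if (!d.HasValue) d = C * 1.10; return d.Value; } }
}
[/code]

Requesting D now triggers at most one call each to CalcA() and CalcB(), rather than recalculating them for every dependent value.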

Code augmentation (adding extra lines of code) is not too bad if you are looking to perform a series of benchmarking exercises. For example, a few years ago I did some work that benchmarked the XML Document Object Model (DOM) against the Simple API for XML (SAX). I wrote a small application that used QueryPerformanceCounter to monitor the results of loading small and large XML documents whilst taking into account the effects of caching. In this scenario, code augmentation proved to be rather useful: the code itself was never going to reach “production” and was merely used for the benchmarking exercise.

Anyway, I hope that you’ll find QueryPerformanceCounter of some use in your applications.

Craig Murphy: author, blogger, community evangelist, developer, speaker, runner