Category Archives: Opinion

Core Values

During 2005, my last employer asked us to answer about 60 questions that revealed personal values (roughly speaking, the things that are important to us and guide our thoughts). Yes, it’s a little airy-fairy, but when the data from all of our staff from all of our offices was collated and presented graphically, it was rather interesting.

Firstly, there was a radar diagram representing a “Schwartz Chart”:


I believe that Shalom H. Schwartz and Wolfgang Bilsky were responsible for this work. The Schwartz Value Inventory (SVI) contains a number of motivational domains. These domains reflect an individualistic interest, a collectivistic interest, or both, and they can be grouped into a two-dimensional structure composed of four higher-order, bipolar dimensions (openness to change, self-enhancement, conservation, self-transcendence). More can be found by following the references found here (worth reading if you want to make sense of the screenshots in this post).

This isn’t actually my radar diagram; if I can locate mine I will update this post (I can’t seem to put my finger on it right now). To arrive at this diagram, a number of employees were asked to complete a questionnaire comprising 70 or so questions. The answers were then used to determine the plot points on the radar diagram. The plot points relate to such things as: peace between people, broadmindedness, honesty, honouring older, more experienced others, respect for tradition, and so on, leading up to social recognition, meaning in work and choosing one’s own goals:

Conservatism: national security, reciprocation of favours, honouring elders, family security, respect for tradition, wisdom…
Intellectual Autonomy: curious, broadminded, creativity…
Affective Autonomy: enjoying life, exciting life, pleasure…
Egalitarian Commitment: social justice, world at peace, responsibility, freedom, equality…
Harmony: world of beauty, protecting the environment…
Mastery: successful, capable, choosing own goals, daring, independent…
Hierarchy: wealth, social power, authority…

There is a lot of moderately useful information present in the radar diagram. Further, it demonstrates three things:

1) The organisational average (light blue, area)
2) The participant’s positioning (orange, line)
3) The standard deviation across the whole organisation (dark blue, line)

Secondly, there was a Values Categories Chart:


Now, I realise that you probably can’t read these in detail. Don’t worry: they are purely for demonstration purposes, and I won’t be testing you on them later.

During 2006, before I left this employer, we were asked to answer two questions based on the previous study:

1. “What should be the most important values in [your employment]? And why?”
2. “Choose 5 values which you think should be the core values of [your employer] and will differentiate us from our competitors.”

Here are my first-cut answers:

Question 1
I believe that the most important values that we should be nurturing and promoting are:

We must think out of the box. Regular, lemming-like thinking just won’t do at all. If you stifle creativity, the morale of individuals and teams takes a hit and folks leave. Thinking out of the box, seeing the bigger picture and beyond, will help us discover better, more efficient ways of delivering excellent service that is innovative, daring and award-winning.

Risk needs to be managed. Instead of stamping down on daring creativity and innovation with “these risks are too great”, open your eyes and accept that some risk is good. Risk that is accepted in a positive fashion will see teams and individuals work harder and smarter to achieve the dare and enjoy the success of a job well done. Stamp down on the dare (the risk) and it will only serve to de-motivate.

Yes, some jobs – bread’n’butter jobs – may not require much in the way of new thinking. However, new ideas, fresh creativity and taking a little risk for a large gain all promote innovation. Clients like new ideas; they like to see folks “doing something out of the ordinary”.

Very few jobs are so simple that they require no learning (perhaps with the exception of some benign admin/overhead tasks). Individuals and team members should be given the opportunity to learn such that they can provide a better service that is more creative, more innovative and more daring.

Demonstrable evidence that the individual and team are able to do the job in hand.

As individuals working on a client project, we need to be capable of influencing and motivating; peer-group awards and qualifications suggest individuals are influential in their given sphere, and team awards are even better. Don’t ignore awards from external organisations: if an employee has “done a good job” and been awarded for that job, recognise it.

Without influence and success, individuals and teams will struggle. A track-record has its place. Success comes from many places: being helpful, being influential, being positive, being supportive, being polite, being encouraging, being community-oriented, the list goes on.

We (not just IT) need to be seen to be bending over backwards to help our clients and fellow workers. And if we made a mistake, it’s helpful if we admit to that mistake right away and bring a solution to the table during that admission.

Similar thoughts to creativity – parochially-minded individuals need not apply. We need to be willing to accept new ideas, new thinking, what worked before might not be best now.

Choosing own goals
Don’t tell individuals and teams how to do their jobs. Let them get involved with the client, the project manager, let them prioritise activities in conjunction with the client. Don’t force them to accept stretch targets that you know they are unhappy with – promote communication from the ground up, it will increase morale and give the project a better chance of succeeding.

Question 2
Five core values:


I don’t know what became of the study; I left this employer just after I submitted my answers to these two questions. The study itself took rather a long time, spanning some seven or eight months (until my departure), and saw some staff hearing the phrase “disciplinary action” to gee them up into completing the original 60 or so questions (not me, I hasten to add!). Who knows, maybe the answers to the two questions provide some insight into who I am?

[Originally written January 2006, not posted. Revised April 2006]

How to win a by-election…

Perhaps not surprisingly (if you live in the locale) the Scottish Liberal Democrats ousted Labour in the Dunfermline and West Fife by-election. The amount of by-election paraphernalia that came through my letter-box was astounding. Some nights we even got two deliveries from the same party. The paper-recyclers will do well this month.

How did the Lib Dem candidate Willie Rennie overturn a huge Labour majority in what had been a safe Labour seat?

Leaflet drops, and plenty of them. Make sure people don’t and can’t forget your name – that’s the message that came across. And because the general voting public like cartoons, Willie associated himself with Oor Wullie – sometimes 2-box sketches, other times 3-box. Dubious copyright issues appeared to be noted and brushed aside. The quality of the leaflets left a lot to be desired; some even looked like the infamous “copy of a copy of a copy”…despite this, folks remembered his name and voted for him. I’m glad, because my mind often goes blank when presented with a list of names on a ballot paper…

It’s a sceptical thought; however, it’s a tactic that was deployed and it worked…it got him 12,391 votes.

Avoid duplication of effort, use technology, increase profit…

QSNews, Friday 13th January 2006 carried, amongst others, a very interesting article in their Comment section: Nobody wants my quantities, by Robert Klaschka of architects and designers Markland Klaschka.

In his article, Robert laments that Building Information Modelling hasn’t seen the uptake he feels it deserves. He is particularly annoyed by the amount of repetition involved in moving data from its paper source to a useful electronic medium:

…legions of architects who find refuge transposing from CAD to spreadsheet; there are also battalions of surveyors out there wielding scale rulers…

I recall part of a quotation from somebody whose name I cannot remember; it went along the lines of: “you should never have to type in a piece of data more than once” – except the author went to the extreme and cited even the smallest examples as candidates for cut’n’paste (or re-use via whatever means is available). I was young when I read this, but even then I knew there was some truth in it, when applied in the correct scenarios. Robert’s scenario above is one that I consider appropriate. Over the years, I’ve seen a lot of data being re-entered: transposed from a paper notebook to a spreadsheet, spreadsheets checked against CAD drawings, even the paper notebooks being checked and checked again. There has to be a better way. There is: make use of information technology, apply it, invest in it, and listen to what folks have to say about its successes. It can save you time and money, make you more efficient, introduce staffing economies and ultimately increase your profit.

Asset Valuation – The Appliance of Science
Given that Radio Frequency ID (RFID) technology is now so economical, many firms are using RFID tags to simplify data collection during an asset valuation. Performing an asset valuation manually, say for an oil field spanning the entire length and breadth of a country that is 90% desert, can be very costly. It can be even more costly if you have to use senior engineers who know what they are looking for in the way of flare stacks, columns, storage tanks, etc. And it gets worse if different engineers use subtly different terminology to describe the same item. Just think how much time and effort could be saved if RFID tags were attached to the various asset items…no longer would we need the senior and expensive engineer jumping in his Dodge for a trip across the desert (we drove Dodge RAMs in the Sahara desert in Libya; other all-terrain vehicles are available).

I plan to write more about the use of RFIDs in future blog posts. In the meantime, there’s a good write up about RFID applications here.

our Dodge
[August 1990: an empty shack in the middle of the Sahara, our Dodge and Jim the driver: far right of the shot – we followed the “tyre route” for a couple of hours, then followed the dunes for a while after the tyres had disappeared, Jim knew where he was going!]

Of course, RFIDs aren’t just useful for desert-based surveys, they’re very useful in more traditional survey environments, such as schools. Since 2003 I’ve been working on and off on a survey application that is used here in the UK. It’s not rocket science, but it works well enough for the client to use our services over and over again. It’s a regular Win32 desktop application written using Delphi 6. We’ve often thought about re-basing the application on a PocketPC device…then we could send the surveyors into the schools armed with a Dell Axim (or similar, other PocketPC devices are available!) to capture the survey data “live”. Even better, add a digital camera to the PocketPC device and the surveyor can take photographs of parts of the school in need of further attention. This has the advantage that we don’t have to send multi-discipline engineers out to survey each and every school. If a surveyor isn’t all that skilled in “costing up” the damage caused by dampness, he can take a photograph of the damage and take it back to the office for a damp-expert to examine. Similarly, if we used RFID tags to identify schools, blocks within schools, rooms within blocks, the amount of data entry can be reduced significantly.
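The schools-blocks-rooms idea can be sketched quite simply: pre-register each tag against its location once, and a scan then seeds the survey record with no typing. This is a hypothetical sketch, not our Delphi application – the tag IDs, school names and record shape below are all invented for illustration.

```python
# Hypothetical sketch: an RFID tag registry that seeds survey records.
# A single scan fills in the school/block/room fields automatically,
# removing the most error-prone manual data entry.
from dataclasses import dataclass

@dataclass(frozen=True)
class Location:
    school: str
    block: str
    room: str

# Populated once, up front, when the tags are fixed to the buildings.
TAG_REGISTRY = {
    "04A1-77F2": Location("Hillside Primary", "Block A", "Room 12"),
    "04A1-77F3": Location("Hillside Primary", "Block A", "Room 13"),
}

def start_survey_record(tag_id: str) -> dict:
    """Seed a survey record from a scanned tag instead of typed-in fields."""
    loc = TAG_REGISTRY[tag_id]
    return {"school": loc.school, "block": loc.block, "room": loc.room,
            "defects": [], "photos": []}

record = start_survey_record("04A1-77F2")
print(record["room"])  # the surveyor never types the location by hand
```

Because every record starts from the same registry, two surveyors can never describe the same room with subtly different names – the terminology problem mentioned above for oil-field assets simply disappears.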

Google are helping us too.

Google Earth is an amazing tool that lets us zoom in on various parts of an oil field – it’s so powerful, we can use it to identify new asset items that weren’t present during the last survey/valuation. Of course, if you know that there are new asset items, you can then improve your estimate of how long the valuation will take and how much it will cost – this tool removes some of the surprises! In some areas of the world, Google Earth is powerful enough to let us read the numbers on airport runways. It’s also powerful enough to provide us with geographical coordinates (latitude and longitude) as well as the elevation at various points within the image – elevation data can help plan how long a survey might take and may allow some optimisation of routes, knowing the precise elevations involved.
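As a small illustration of what those coordinates buy you: given two latitude/longitude readings taken off Google Earth, the standard haversine formula gives a first-cut straight-line distance for route planning. The coordinates below are made up; a real route estimate would also fold in the elevation data.

```python
# A minimal sketch: straight-line distance between two survey points whose
# coordinates were read off Google Earth. The haversine formula is standard;
# the two "plant" coordinates are invented for illustration.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/long points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(a))  # mean Earth radius ~6371 km

# Two hypothetical plant locations in the UAE desert:
d = haversine_km(24.10, 52.73, 24.35, 53.01)
print(f"{d:.1f} km")  # a first-cut figure for planning the drive
```

It is only a lower bound on the driving distance, of course, but summing these legs over a list of asset sites gives a defensible first estimate of how long the field work will take.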

For example, on the left [hopefully] you can see a location in the UAE desert. From this image we can glean a few useful facts. Firstly, has the plant expanded since we last looked at it? A quick visual check will tell us. Secondly, has anything been added or moved around? Thirdly, has anything disappeared? Armed with these basic facts, we can prepare informational reports that the on-site surveyors can make good use of. Even if such images are just used as project management tools or site plans (the originals of which are often out of date and/or unavailable), their use can be a great time-saver. As they say, “a picture speaks a thousand words”.

A word of warning however, Google Earth is rather addictive!

Robert’s right in what he says: the importance of technology in today’s non-IT disciplines cannot be overstated. If you are lucky enough to have pro-active, skilled IT people in your non-IT shop, see if you can get them working for you: give them a challenge, make them part of the business, listen to their ideas. The synergy that this fusion might yield could be a catalyst in your business, and could see enhanced relationships with your clients, increased turnaround, fewer staff required to complete a job in a shorter space of time…and, of course, increased profit as a result of increased efficiency.

“the source code is the ultimate documentation”

The BBC reports (here too) that Microsoft is to allow access to the source code of selected products. This interesting move will hopefully satisfy the competition commission, which is pushing Microsoft to provide more documentation for its products so that vendors can make their own products more compatible. So, rather than provide written textual documentation, Microsoft is saying that its code is its documentation. There’s truth in that: gone are the days when developers had to maintain what amounts to two sets of documentation, the code and the written documentation that went with it. With customers crying out for updates and bug fixes, it’s not difficult to guess which activity gets neglected…updating the documentation.

Even comparatively recently, with the divergence of code and textual documentation, along came many attempts to integrate the documentation into the code. That’s fine, just so long as it’s possible to “hide” the integrated help, as it frequently gets in the way during development. I have to admit to disliking the bulk of the automated documentation that I’ve seen so far: it’s usually very much incomplete – a single line of documentation for a method isn’t really enough. It’s for this reason that I prefer to treat the code as the best way of gaining an understanding of how something really works. Remember that documentation goes out of date; folks don’t update it as frequently as they update the source code, so the only surefire way of guaranteeing that you’re about to do the right thing is to look at the source code.

Microsoft’s legal chief, Brad Smith goes as far as saying

the source code is the ultimate documentation…It should have the answer to any questions that remain

However, Neelie Kroes, the competition commissioner disagreed:

“Normally speaking, the source code is not the ultimate documentation of anything…”

“[This is] precisely the reason why programmers are required to provide comprehensive documentation to go along with their source code.”

I’m afraid to say that I disagree with Neelie’s last statement. If I were building a product, whether software or hardware, and integrating it with a Microsoft product (or any software vendor’s, for that matter), I would be happier with the source code than with a large textual document. Yes, I would like an architectural overview of the system that I’m looking to integrate with, but that need not be more than a few pages and should be graphical in nature. I know that I’m not alone here: how many times have you been working with a product, following the documentation to the letter, only to realise (hours later) that the documentation is factually incorrect? We’ve all been there! Frequently, documentation is created by a separate department, with minimal input from the original programmers. Or the original programmers write the first draft of the documentation, then “the editors” take over and apply their magic…often changing the meaning or interpretation of something critical along the way!

If programmers are required to provide comprehensive documentation, then project managers and customers should allow the programmers sufficient time to create high-quality documentation at the outset, and provide them with time to update it. Sadly, in my experience, documentation is one of the things project managers treat as contingency time, or it’s one of the first things the customer insists is dropped from the project (“you can easily write the documentation later – can we have this extra feature instead?”).

Jack Reeves first raised this idea way back in 1992 when he published works such as What Is Software Design?, What Is Software Design: 13 Years Later and Letter to the Editor. Click here for more information about these essays.

“the source code is the ultimate documentation”, something not missed by this publication that is at least 16 years old:

agile world

Related posts
The code is the design

On blogging #1 – Outbreak of blogs forces rivals to take notice

In this, the first in a series of postings in the category “On blogging”, I’ll take a look at a number of issues surrounding blogging. Notably, this posting will touch on blogging as a marketing device and employee/corporate blogging. Subsequent posts will discuss these topics, and others, in more detail.

Are blogs the end of media, marketing and advertising as we know it, or vanity publishing that will eventually suffocate under sheer weight of numbers?

Earlier this year, Barry Dorrans introduced me to the work of cartoonist Hugh MacLeod. I was therefore pleased to see a piece in The Guardian of 28th November 2005 carry a sample of his work. Jennifer Whitehead’s piece Outbreak of blogs forces rivals to take notice caught my eye for a couple of reasons. Firstly, because as regular readers of this blog will know, I’m a great believer in the power of the blog: amazing things can be achieved using a simple blog posting and then letting the readership propagate the message – DeveloperDeveloperDeveloper was “marketed” in such a way. Secondly, because Barry had mentioned Hugh’s name to me, and there he was in the article, along with a photograph of Hugh presenting one of his business cards.

Whitehead’s article reiterates blog statistics which are generally available via Technorati. With the number of blogs increasing at an amazing rate of knots, surely there comes a time when we’ll reach critical mass? Well, I don’t think so, not yet at least. Yes, there is a profusion of blogs today, and that number may well double in six months’ time, but when it comes down to it, it’s all about filling a niche or, as Hugh puts it, a gaping void. It’s all about service, information and quality. Between you and me, most of the blogs that are popping up every third of a second are likely to be what Whitehead refers to as vanity publishing. So, apart from adding to the burden of the already index-heavy search engine fraternity, which means we’ll have to spend a little more time weeding out the dross in search results, we’ve got little to be worried about. Of course, in time, the search engines will improve beyond our wildest dreams and this “de-drossification” of search results won’t be necessary.

But, every so often, along comes something that fills the gaping void. If you put enough monkeys in a room with typewriters [surely Microsoft Word? – Ed.] and give them an infinite amount of time, they will, so I’m told, eventually write something akin to the works of Shakespeare. Perhaps that’s a somewhat grandiose claim, but nevertheless, if enough blogs are created, eventually some of them are going to fill the void and become killer blogs. A killer blog has the potential to do serious damage to existing marketing channels and the ability to engage customers almost at a one-to-one level. Is your marketing department capable of that? I didn’t think so. The killer blog puts the employee or person who actually performs the work directly in touch with the customer or customers, and in a way ever so different from traditional e-mail. For example, when was the last time you wrote (either on paper or via e-mail) a letter of complaint? It’s likely that your letter was kept strictly “in house” when it arrived at its destination. The response may well have been token in nature: “we’ve never had a problem like this before, here’s a gift voucher…” However, the advent of the Internet changed all that. One need only search for “xyz” sucks to learn more about how other people view “xyz”, a point not missed by Clarke Ching.

Naturally, this can work to the advantage of the customer. Whitehead notes that Antony Mayfield (director at Harvard Public Relations), whose own take on Whitehead’s piece can be found here, observed that the recent Apple iPod Nano “screen scratching” problems were first reported via blogs. With the widespread appeal of gadgets like the iPod, the publicity offered by such blogs carries some awesome power. In the case of the iPod Nano, a gadget whose rise in popularity was somewhat spectacular, the problems reported via blogs forced its manufacturer, Apple, to quickly admit that there were problems with some screens, and a product recall was put in place.

Whitehead’s article also mentions Cillit Bang’s made-up celebrity/personality Barry Scott. It seems the advertising folks hired by/at Cillit Bang decided to take advantage of Barry Scott’s blog for their own marketing gain. Except they stepped over the line: using the name of Barry Scott, they left a message for one Tom Coates, who was looking for his father, whom he had not seen in almost 30 years. Described as clumsy and a new low for marketers, Cillit Bang have demonstrated the care that needs to be taken when moving into on-line marketing, particularly where blogging is concerned. It’s no wonder that Barry Scott’s blog has seen very few postings during the latter part of 2005. The big blog company’s Adriana Cronin-Lukas sums it up rather well in her Cillit Bang clanger blog post. Stephen Newton also presents the case somewhat succinctly.

On the other side of the fence, employers and businesses in general must be careful about what their own bloggers are saying, both during working hours and in their personal time. Take the unfortunate case of Joe Gordon, who lost his job at a famous bookseller after they deemed his private posts too sensitive. I see that Joe is promoting the Committee to Protect Bloggers, a group devoted to the protection of bloggers around the world. It’s important for employers and businesses to protect their assets, and in the occasional extreme case, yes, you may find a blog posting that goes against the grain or is close to the bone. The action you then take will be what sets you apart from your competition. Take a heavy-handed approach, whereby you are essentially punishing the many instead of the few, and you may find it affects your business in ways that you may not understand. However, take a softly-softly approach, lay down some basic blogging guidelines (not rules), encourage your teams (employees) to blog and interact with customers, and it is very likely that you will open up entirely new marketing streams that you never knew existed. The commercial value of a blog (even a corporate blog) is on the increase. There are even genuine cases of people being recruited to fill the position of blogger, i.e. getting paid to blog – Microsoft’s Robert Scoble is a case in point.

Blogging is here to stay. It is a powerful marketing device. How you use it is up to you; but use it with care, use it properly, and you will discover its power. Your rivals are likely to be examining the blogging space already – if you’re not taking heed of blogging now, catching up may prove difficult.

Related posts
Blogging as a marketing device
Bloggers can be sued…
TechEd 2005 – IT Blogging
TechEd 2005 – MS IT: Microsoft’s Blogging Engine – Construction and Delivery
TechEd 2005 – E-mail or blog?
Blogging: does this signify the end of NNTP?

Quote: Jeremy Clarkson

During my recent travels, amongst other things, I was reading Jeremy Clarkson’s book The World According to Clarkson – it’s an excellent and humorous read, I can strongly recommend it. It had me laughing out loud…on board a fairly full easyJet 737-700 en route from Bristol to Edinburgh.

The book is a collection of his columns from The Sunday Times; I found his take on “facts” rather amusing:

The More We’re Told the Less We Know
…if you have all the facts to hand, you will see that there are two sides to every argument and that both sides are right. So, you can only have an opinion if you do not have all the facts to hand.

This would explain why we find ourselves listening to a lot of opinion. Very few people have all the facts, even if they are under the impression that they do – they are most likely mistaking inadequate or inappropriate information for correct, factual information.

Quote: Boris Johnson

Boris Johnson’s column on page 22 of The Daily Telegraph, 24/11/2005, the subject matter of which is not something I wish to discuss on this blog (but here’s a clue), carried an excellent quote:

If we suppress the truth, we forget what we are fighting for, and in an important respect we become as sick and as bad as our enemies.

In some respects this can be applied in the work-place too. It’s important not to suppress the truth at work, otherwise employees become disillusioned, communication becomes garbled and some folks end up joining the “if you can’t beat them, join them” camp – something that is wrong on so many counts.

Related Posting: #10 – The truth is best…admit it…

TDD, Visual Studio 2005, Scott Bellware’s blog post

In response to the MSDN Test-Driven Development Guidelines, Scott Bellware has an interesting rant over here.

Update: before I could post this entry, the MSDN posting had been removed; luckily, I had saved a copy elsewhere. Since I’ve just spent the last few minutes writing this, I’m posting it anyway. Here’s what’s there now:

Guidelines for Test-Driven Development
This topic is obsolete and has been removed from the MSDN documentation.

It would appear that the MSDN posting has hit a nerve in the TDD community and, on first impressions, rightly so. I’m about to embark on a couple of days’ travel; I’ll be taking some of the material surrounding this posting and rant with me to read, so expect to see me follow it up later this month.

To pick on a few postings (on the off-chance you’re not following this rant elsewhere): Scott Dockendorf steps up and denounces any involvement in the article over here. Agile guru Roy Osherove is very vocal about the article in his Microsoft fails miserably to explain or promote Test Driven Development in Team System posting. And Julian M Bucknall is all depressed about it over here.

On a separate note (and it’s perhaps why I associated this posting with my “Opinion” category – not that this counts for much, but I feel as if that category lets me write what I like and not get sued! That’ll be right!): as an MVP, should I be seen to be criticising Microsoft? Ignoring the relationship that exists, if an organisation did something that was wrong (in the eyes of the majority), I would feel professionally obliged to point out the error of their ways. Even after nearly a decade in the same organisation, where such an attitude is pretty much frowned upon, I still maintain this professional desire to see things done properly. However, again on first impressions, I see that the MSDN posting is rapidly becoming MSTDD, so at least the differentiation is there. Hopefully all the TDD newbies will catch up on TDD postings prior to embarking on what amounts to a shoe-horned flavour of TDD.

More when I return from my travels.

Here’s the original MSDN posting:

Visual Studio Team System
Guidelines for Test-Driven Development
If your software-development project uses test-driven development, or TDD, you can benefit from features of Visual Studio 2005, and in particular, features of Visual Studio 2005 Team System. These features include the unit test type of Team Edition for Testers, and especially the ability to generate unit tests automatically; automatic refactoring capabilities that are introduced in Visual Studio 2005, and the Class Designer tool.

The unit test support of Team Edition for Testers is particularly suited to TDD because these Team System testing tools can generate tests from a minimum of production code.

Process Example
In your TDD project, you might want to follow these steps:

  1. Define the requirements of your application.
  2. Familiarize yourself with the feature areas of your application, and decide on a single feature, or the requirements of a feature, to work on.
  3. Make a list of tests that will verify the requirements. A complete list of tests for a particular feature area describes the requirements of that feature area unambiguously and completely.
  4. File work items for feature requirements and for the tests that need to be written.
  5. In Visual Studio, create a project of the type you want. Visual Studio supplies the initial production code in the form of files such as Class1.cs, Program.cs, and Form1.cs, depending on the project type.
  6. Define the interfaces and classes for your feature or requirement. You can add a minimum of code, just enough to compile. Consider using the Class Designer to follow this step. For more information, see Designing Classes and Types.
  7. Note
    The traditional TDD process does not contain this step. Instead, it advises that you create tests first. This step is included here so that, while creating tests, you can take advantage of two features in Visual Studio 2005 Team System: the GUI design capabilities of the Class Designer, and the automatic test-generation capabilities of Team Edition for Testers.

  8. Generate tests from your interfaces and classes. For more information, see How to: Generate a Unit Test.
  9. Compare the tests that have been generated with the list of tests you prepared in step 3. Create any tests that are missing from the list you wrote in step 3. For more information, see How to: Author a Unit Test.
  10. Organize your tests into test lists. You can, for example, base your lists on test use, such as check-in tests, BVTs, and tests for a full-test pass; or on area, such as UI tests, business-logic tests, and data-tier tests. For more information, see How to: Organize Tests into Test Lists.
  11. Update the generated tests to make sure they exercise the code in ways that cover the requirements that you have defined.
  12. Run your tests. For more information, see How to: Run Selected Tests.

    Verify that all your tests fail. If any test produces a result of Inconclusive, you did not update the generated test to add the appropriate verification logic. If any test produces a result of Passed, that test is not implemented correctly: you have not yet implemented your production code, so no test should pass.

  13. Implement the interfaces and classes of your production code.
  14. Run your tests again. If a test still fails, update your production code in response. Repeat until all the tests pass.
  15. Pick the next feature or requirement to work on and repeat these steps.
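The red/green cycle in steps 12 to 14 can be sketched outside Visual Studio too. Here is a minimal illustration in plain Java (the original walkthrough targets Visual Studio 2005 Team System and C#; the `PriceCalculator` class, its method and the requirement it implements are hypothetical, invented purely for illustration):

```java
// Minimal sketch of the TDD red/green cycle from steps 12-14.
// All names here are hypothetical examples, not part of the walkthrough.
public class PriceCalculatorTdd {

    // Step 6 equivalent: a class defined with just enough code to compile.
    static class PriceCalculator {
        // First pass ("red"): before implementing, a stub such as
        //   throw new UnsupportedOperationException("not implemented");
        // guarantees the test fails, as step 12 demands.

        // Step 13 equivalent: the implementation that turns the test green.
        double total(double unitPrice, int quantity) {
            return unitPrice * quantity;
        }
    }

    // Step 3 equivalent: a test derived from a stated requirement
    // ("total = unit price x quantity").
    static boolean testTotal() {
        PriceCalculator calc = new PriceCalculator();
        return calc.total(2.50, 4) == 10.0;
    }

    public static void main(String[] args) {
        // Step 12/14 equivalent: run the tests and report pass/fail.
        System.out.println(testTotal() ? "PASS" : "FAIL");
    }
}
```

Run first against the stub to confirm the test fails, then against the implementation to confirm it passes; that is the whole discipline in miniature.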

On contractor value, internal vs external development

Earlier, I wrote about my thoughts surrounding what is essentially outsourcing of a given function…thoughts that were firmed up after I had been reading Joel On Software.

I’m pleased to see Richard Jonas follow it up in his Internal or External posting. I have to agree with Richard’s comments: the decision to “in-source” or “out-source” on a project-by-project, or function-by-function, basis has to be made using all the information available (I can’t think of a situation where this wouldn’t be true, although we all see such decisions being made without such consideration). And it is important to note that some projects/functions are inherently meant to be outsourced.

My original posting was prompted (apart from by Joel) by a letter in Computer Weekly dated 20th September 2005 on the subject of “Do contractors offer better value for money?” The author notes that contractors’ rates of pay are typically twice those of permanent employees. He goes on to state that hidden costs such as bonus, pension, employer’s national insurance, recruitment fees, the cost of human resources, accounts, payroll, taxes, holiday and sickness actually make a contractor better value and more flexible. Are these costs really hidden? Surely most businesses know exactly how much it costs to have an employee sit at their desk carrying out their function? Add to all these “hidden” costs the opportunity cost of having somebody else sitting at the same desk carrying out a potentially more profitable function, and so on. Clearly there are two sides to scenarios like this: the author of this letter sits on the side of contractors, whereas I seem to sit on the side of permanent employees…

On the premise that hiring a contractor is analogous to outsourcing a project/function, let’s follow that notion for a while…

Are contractors better value and more flexible? Insofar as the contractor doesn’t [usually] get embroiled in the internal politics that so many permanent employees do (although not through choice, I might add), the contractor is free to focus on the task in hand: they can spend 90%+ of their working day on the project/function. Therefore, on the surface, they appear more productive than permanent employees. They’re getting the job done faster, possibly better, but not necessarily cheaper. Remember, at twice the cost, folks will see that as being “more expensive”, unless they are able to understand that the contractor may have taken less time to complete the piece of work (although the work is unlikely to be completed in 50% of the time it would take a permanent employee).

Unless the contractor is able to “hit the ground running”, it’s very likely that some mentoring will be required. Mentoring is often a service delivered by a permanent employee, and their costs must be factored in when assessing contractor value. Indeed, there’s often an element of re-work involved if a mentor or supervisor is involved. Despite best efforts, I’ve seen the work of contractors being “finished off” by permanent employees. Similarly, I’ve seen mentors/supervisors grimace at how long recently in-bound contractors take to complete a task known to be simple (this is more obvious where the mentor/supervisor would normally have performed the job the contractor was brought in to do).

Where the function/project involves the development of a piece of software, it’s very rare (in my experience) that the ultimate end customer actually uses the software the day they are issued with it. That day may well be the contractor’s last, in which case who fixes the customer’s snags? This is less of a problem where the customer has been involved in the development process all along, something which is very desirable but rarely happens in traditional software development shops. Often the mentor has to “pick up” the customer’s snags and fix them. This is a double-edged sword. The mentor is nowhere near as familiar with the code-base as the contractor. This will lead to increased start-up time, i.e. the snag will take longer to fix. It will also lead to reduced quality: the mentor may think they have fixed the snag without realising their fix may have repercussions elsewhere (yes, practising test-driven development and having a suite of tests would help, but we’re talking about traditional software development, not agile!)

Contractor flexibility is evident in that a suitably qualified/skilled contractor will lend themselves to any platform or language, and their curriculum vitae will be a work of honesty, not fiction. However, the business may find itself paying a contractor to learn a new skill that puts said contractor in a more marketable position for their next contract. Not good. Similarly, said contractor completes the function/project then moves on to a new contract. Whatever the contractor had to learn in order to complete the function/project is a skill that goes with them… and it may well have been a proprietary skill that would have been better value if it remained “in house”.

I’m standing by my original thoughts: if a piece of work, a function or project is considered “core” to your business, subject to the decision making process noted above and in Richard’s post, it should be kept in-house, i.e. internal. Proprietary intellectual skills that give you a business advantage, give you a means of adding value, give you the edge, should remain in-house.

Joel#1: Better, Faster, Cheaper

In an earlier posting I mentioned that I had been reading Joel On Software and that I might make the odd reference to it. Well, here we go…

For me, Chapter 35 (you can read the original posting here) has something very important to say:

If it’s a core business function — do it yourself, no matter what.

Actually, that’s a succinct way of putting it: the remainder of this post merely pollutes the eloquence of Spolsky’s quote (sorry!)

Spolsky suggests that we pick our core business competencies and goals, and do those in-house. Of course, this does suggest that you’re capable of identifying your core business competencies in the first place (although I’m sure that there are plenty of consultancies who will offer to help you identify them…for a fee and a retainer). What this means, in a nutshell, is this:

1. Keep all the good stuff that you do close to you, i.e. in-house

2. All the bad stuff that you have to do in order to do the good stuff, get somebody else who is good at it to do it, i.e. outsource it or find ways to do it better, faster, cheaper (more here)

Identifying the good stuff is simple: it’s the stuff that makes you money – it’s the stuff that’s on the invoices that you send to your clients. Invoices – that’s important too: don’t classify your invoicing process as part of the bad stuff. It’s a process that should be lean, agile, flexible and very efficient. The earlier your invoice goes out, the sooner you’ll get payment in (I know this is obvious, but I’ve read about some organisations who don’t understand this concept).

The bad stuff is anything that costs you money, and it’s something that you should optimise as much as possible. If you are able to reduce the cost of the bad stuff, it’ll make the profit margin on the good stuff even sweeter. This means that you’ll have to identify bottlenecks: any process that involves a single point of failure (this might be a single person being responsible for dealing with too many requests, or it might be a process that involves double or triple handling of decisions/information) is a candidate for outsourcing or optimisation (faster, better, cheaper). It’s important to recognise that outsourcing the bad stuff might not be cost-effective, e.g. don’t outsource your IT function.

However, it’s also important to recognise that some of the good stuff relies on the services and knowledge of secondary functions (these functions often produce a “value add service” that augments the client offering, sometimes to the extent that the good stuff can’t take place without it). Once you’ve identified these secondary “good” functions, you may wish to start doing more of them (which should equate to either more profit or more client satisfaction: both are good, but there is a trade-off – you need to get the balance right).

An example
If you are a hypothetical company that specialises in building industrial weighing equipment (perhaps for weighing lorries and trucks), that’s your primary function. Your secondary function might be developing software that incorporates some intelligence into the weighing process; it might know about a lorry’s unladen weight, for example. The secondary function might also extend into the client’s own management information system, perhaps feeding it with important information that can be used to verify the lorry owner’s credit worthiness, generate an automatic invoice (I told you they were important), post to ledgers, etc. You might think that it’s not part of your primary function, but the value add from being able to integrate with the client’s system is part of the good stuff – it’s something that should be nurtured and extended.

Spolsky’s closing paragraph presents a health warning that may have more truth in it than you believe:

The only exception to this rule, I suspect, is if your own people are more incompetent than everyone else, so whenever you try to do anything in house, it’s botched up. Yes, there are plenty of places like this. If you’re in one of them, I can’t help you.

If that rings a bell with you, then I too can’t help you. Sorry.

Early attribution of blame…

It’s amazing how quickly the Americans want to blame “somebody” for the terrible devastation caused by Hurricane Katrina. Blame – is it a cultural thing? Is it the American way?

Whatever it is, early attribution of blame seems to be the order of the day, as reported in today’s Guardian: Bush team tries to pin blame on local officials. I don’t think anybody could have predicted the scale of the devastation, so why bother trying to blame somebody now? Just get on with solving the problem; that’s the real order of the day.

However, there is the other side of the coin, and it manifests itself in a plea to postpone attribution of blame. Good call, I’d even go a step further and postpone it until after the rebirth of New Orleans and the other affected townships.

Early attribution of blame isn’t going to help anybody. Instead of pinning blame on somebody, greater focus should be placed on solving the problems of the moment: rebuilding the land-based infrastructure, creation of jobs, restoration of law and order and an examination of the damage done to the offshore facilities that provide America with much of its oil. It’s basic project management: solve the problem before going into post-mortem mode (even then, is it really worth it for all projects?)

On another related note, I read with interest a letter in today’s Guardian:

Donald Rumsfeld declared the looting in Iraq following “liberation” to be the consequence of “the pent-up feelings that result from decades of oppression”. We await his wisdom on New Orleans.
Chris Mazeika, London

My interest stems from wondering just how Mr Rumsfeld might answer were he asked to comment on this letter that quotes him to the letter.

Whilst it’s not the best news to read, I’m glad to see that blogs are rapidly playing their part in news provision: Hurricane Katrina – blogs and links