Posts

Showing posts from 2025

Git Commit Messages - Why keep the 50 character convention?

As developers, we should all know how to write a good git checkin message. Done right, it makes delving into change histories delightful rather than a chore. What's that? You have never been told how to write a good checkin message? Chris Beam's article is just the thing for you! (Chris explains the rules far better than I could here without going off on a tangent) TL;DR:

1. Separate subject from body with a blank line
2. Limit the subject line to 50 characters
3. Capitalize the subject line
4. Do not end the subject line with a period
5. Use the imperative mood in the subject line
6. Wrap the body at 72 characters
7. Use the body to explain what and why vs. how

But I want to talk about Rule #2 (with a little about Rule #6 thrown in). That 50 character first line limit.

Rule 2. Limit the subject line to 50 characters

Back in the Dark Ages, when monitors were typically 80 characters wide, this limit made some physical sense. You didn't want to wrap commit summaries because it makes them scra...
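For anyone who wants a gentle reminder at commit time, here is a minimal sketch of a commit-msg hook in Python that warns when a message strays from the 50/72 convention. The limits and the warn-only behaviour are assumptions you can tweak; git does pass the hook the path of the message file as its first argument.

```python
#!/usr/bin/env python3
# Minimal commit-msg hook sketch: warn when a commit message breaks
# the 50/72 convention. Install as .git/hooks/commit-msg (executable).
import sys

SUBJECT_LIMIT = 50   # Rule 2: subject line length
BODY_LIMIT = 72      # Rule 6: body wrap width

def check(path):
    with open(path, encoding="utf-8") as f:
        # Ignore git's own comment lines in the message template
        lines = [line.rstrip("\n") for line in f if not line.startswith("#")]
    problems = []
    if lines and len(lines[0]) > SUBJECT_LIMIT:
        problems.append(f"subject is {len(lines[0])} chars (limit {SUBJECT_LIMIT})")
    if len(lines) > 1 and lines[1].strip():
        problems.append("missing blank line between subject and body")
    for i, line in enumerate(lines[2:], start=3):
        if len(line) > BODY_LIMIT:
            problems.append(f"line {i} is {len(line)} chars (limit {BODY_LIMIT})")
    return problems

if __name__ == "__main__":
    for issue in check(sys.argv[1]):   # git passes the message file path
        print(f"commit-msg: {issue}", file=sys.stderr)
    # Warn only; exit non-zero instead if you want to block the commit.
    sys.exit(0)
```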

SDET - missing the point since the 1990s

A term that I still hear occasionally is "SDET", or "Software Development Engineer in Test". It is also known by a few other terms - Developer in Test, Test Automation Engineer and so on. Essentially it is a job role for a developer whose main responsibilities are to write automated tests and frameworks to check software. And it is a truly awful case of missing the point completely. It is a job title that indicates a fundamental systemic problem in an organisation.

First, some history. Back in the Bad Old Days of the 1900s, we used to develop software using a linear waterfall style process, mostly due to a bizarre and tragic misunderstanding of a single academic paper (Royce, 1970). There was a lot of management thinking that modelled software development as a factory production line, and assumed that there were cost reductions from specialisms - someone creates a software design, someone builds that design, and someone tests what has been written. At the end, som...

Testing false positives and negatives. Which one's which?

We all know tests need to be reliable (don't we?!). One of the worst things any test can do is give false results, whether consistently or intermittently. A test that passes when it should fail gives a false sense of security. A test that fails when everything is OK leads to wild goose chases trying to find the non-existent problem. This should be obvious. We call these false positive and false negative results.

But just recently I have been thinking about what is a false positive and a false negative? It's obvious - until it isn't. So hopefully writing this down will clear it up in my mind, and some others. I'll add that this is my thinking right now after discussing the issue with several people - just make sure the definition inside your own company is consistent.

Turns out, whether something is a positive or negative, false or otherwise, depends entirely on what signal is being detected. What is the 'positive' signal? For testing, what is a tes...
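To make that concrete, here is a toy sketch (illustrative names only, not from any real test suite) of how one and the same event - a test that fails although the software is fine - picks up opposite labels depending on which signal you decide to call 'positive'.

```python
# Toy sketch: the same observation gets opposite labels under two
# different choices of what the 'positive' signal means.

test_failed = True              # the test reported a problem...
software_has_defect = False     # ...but the software is actually fine

# Convention A: 'positive' = the test detected a defect.
# Reporting a defect that is not there is then a FALSE POSITIVE.
if test_failed and not software_has_defect:
    label_under_convention_a = "false positive"

# Convention B: 'positive' = the test passed (software judged good).
# The very same run wrongly denies the software is good: a FALSE NEGATIVE.
if test_failed and not software_has_defect:
    label_under_convention_b = "false negative"

print(label_under_convention_a, "/", label_under_convention_b)
# -> false positive / false negative
```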