Friday, 12 February 2016

Time for more murder

After some very positive responses to my first Stephen Capel murder mystery I'm delighted to say that the second book, A Timely Confession, is now available.

In the sequel to A Lonely Height, we find Stephen Capel settled into his first parish in the village of Thornton Down. As Christmas approaches, an unlikely confession of murder throws Capel into a complex and dangerous investigation. A software developer has been killed just before the launch of a make-or-break new product. While trying to help those left behind, Capel is pulled into the mystery of who really killed Mark Nelson. As Capel attempts to cope with an upheaval in his private life and to help those whose lives are torn apart by a second murder, he must search the snow-covered streets of Bath for answers before another victim dies.

It's just £6.99 for paperback or £1.99 on Kindle.

Nothing like a good murder to get you through the winter...


Thursday, 11 February 2016

What is a paradox? It's paradoxical

Is the definition of a paradox paradoxical? Before we get into a philosophical spiral, this thought was inspired by a complaint I received when I published a review of a book about the Fermi paradox. Mark Hogarth remarked 'They'll call anything a paradox these days.' When I pointed out the name Fermi paradox dates back to the 50s, he responded 'Yes, I know, but you gotta agree that the word 'paradox' is rarely used properly... here it's just a puzzle, like the twin 'paradox'. Now Russell's paradox - that IS a paradox.'

So was Mark right? Have many of us (me included) been using the term incorrectly? So you don't have to, I delved into the trusty source of all things wordilicious*, the Oxford English Dictionary. And got quite a surprise.

Apart from an obsolete usage, the dictionary's first definition is the one that I use - a statement that appears to contradict itself or be ridiculous, but which turns out to be well founded or true. Rather confusingly, the second definition is an almost opposite version which I know some people use, making a paradox a not-so-obvious fallacy. And the third definition is the logician's version, which makes a paradox an argument that appears to be sensible and based on logical principles, but which leads to a conclusion that is, as the OED coyly puts it 'against sense.' Like Russell's paradox** or the rather crude example I've used as an illustration above.

However, what is really interesting is that (apart from the obsolete meaning) my definition has the oldest citation, going back to 1569; the negative definition is almost as old, with the first example dating from 1570 - but the logician's definition has no evidence before the 20th century (in fact, a 1903 usage by Bertrand Russell himself is the first they know of).

So, interestingly, Mark's complaint was back to front. It's not that they'll call anything a paradox these days, but rather that logicians have taken a word with a long established meaning and (relatively) recently given it a different one. Don't you just love words?

* Sadly, 'wordilicious' isn't in the OED. But it ought to be.

** Very crudely, Russell's paradox, which requires some basic set theory, goes something like this. Imagine we've got the set of all sets that are members of themselves (let's call it the SELF set). So, for instance, the set 'dogs' is not in the SELF set, as the set 'dogs' is not a dog. But the set 'things that aren't dogs' is in the SELF set, because that set isn't a dog, so it is a member of itself.

The paradox arises when we consider the set of things that aren't in the SELF set. Is that set in the SELF set? If it is in the SELF set, then it isn't in the SELF set - which doesn't make sense. But if it isn't in the SELF set, then it's not a member of itself, so it is in the SELF set - which also doesn't make sense. But it does make your head spin.
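For the programmatically minded, here's a loose sketch of the footnote's argument in Python. Python sets can't contain themselves, so this models a 'set' as a membership-test function - an illustrative analogy I'm introducing, not anything from the original formulation:

```python
# Model a 'set' as a membership predicate: s(x) answers 'is x in s?'

def dogs(x):
    """Membership test for the set 'dogs'."""
    return x == "dog"

def not_dogs(x):
    """Membership test for the set 'things that aren't dogs'."""
    return x != "dog"

def in_self(s):
    """Is the set s a member of itself, i.e. does s pass its own test?"""
    return s(s)

print(in_self(dogs))      # False - the set of dogs is not itself a dog
print(in_self(not_dogs))  # True - the set of non-dogs isn't a dog either

# The paradoxical set: everything that is NOT a member of itself.
def russell(s):
    return not in_self(s)

# Asking whether russell belongs to itself is where it all goes wrong:
# russell(russell) is True exactly when in_self(russell) is False, but
# in_self(russell) just evaluates russell(russell). Logically that's a
# contradiction; in Python, calling in_self(russell) simply recurses
# forever, so we leave it uncalled.
```

The infinite recursion is this model's stand-in for the head-spinning 'if it is, it isn't; if it isn't, it is' of the real paradox.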

Wednesday, 10 February 2016

Stop Teslaing me

When I ask not to be Tesla'd, I am not referring to the electronic stun guns of the TV show Warehouse 13, but pointing out that I really would prefer it if those of you who like to put 'memes' (yuck, horrible word) on Facebook would stop sticking up the kind of guff illustrated on the right, which purports to be Tesla 'describing a cell phone.'

There are a number of problems with this. One is that Tesla didn't have a good grasp of electromagnetic radiation, nor did he accept quantum theory, so he would have had serious problems with the mechanisms required to make a mobile phone work.

More to the point, though, Tesla's handwaving remark was in a long tradition of broad predictive comments which certainly show that an individual is open minded, but do not necessarily indicate that they are inventing something ahead of its time.

For instance, I dearly love the thirteenth-century friar Roger Bacon - so much so that I've even written a book about him. But no one sensible would suggest that Bacon understood television or aircraft. If, however, I use the 'Tesla foresaw X' approach, he seemed to predict both. In one short burst in a letter to an acquaintance, for instance, he listed self-powered ships, the horseless carriage, the flying machine, something that sounds like a pulley system, and a diving suit or diving bell, most of which would not become practical for another 600 years.

Elsewhere he wrote:

We may read the smallest letters at an incredible distance, we may see objects however small they may be, and we may cause the stars to appear wherever we wish. So, it is thought, Julius Caesar spied into Gaul from the seashore and by optical devices learned the position and arrangement of the camps and towns of Brittany.
Such devices are very unlikely to have existed in Bacon's day (let alone Julius Caesar's) - the first microscopes and telescopes would not come along for about 300 years. And to make some of Bacon's claims true would need TV rather than a simple optical system.

Admittedly it's possible that Bacon achieved something like a crude telescope and/or microscope by messing about with lenses - but certainly nothing capable of what he described, nor would he understand how to do it. Bacon was not 'describing a telescope' any more than Tesla was 'describing a cell phone.' Tesla was speculating, given the knowledge of the time, as many others did, what was possible in principle. And that's a very different thing. Nikola Tesla was one of the greatest electrical engineers ever, but this kind of myth building doesn't do science any favours.

Tuesday, 9 February 2016

The book event horizon

Mostly books I won't re-read, as these
four shelves are books wot I wrote
Whenever we have tradespeople in the house, they tend to point to my wall of books and say something like 'Someone likes reading.' Leaving aside the sad reflection that having a few bookcases is now comment-worthy, it brings to mind another slightly depressing thought.

I probably have about 1,000 books on the shelves (I halved the collection when we moved 6 years ago, but it grows back), almost all of which I have read, but I keep them in case I want to re-read them.

Now, let's say I've got about 20 years of reading left in me. I read about 60 books a year, but of those 2/3 are new. So that's 20 re-reads a year. So, realistically, at maximum, around 400 of those books are going to be read again. Which seems a shame.
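The back-of-envelope sum in that paragraph can be laid out explicitly (the 20 years, 60 books and two-thirds figures are the post's own assumptions):

```python
# 'Book event horizon' arithmetic from the post.
years_left = 20        # assumed reading years remaining
books_per_year = 60    # total books read per year
new_books = books_per_year * 2 // 3        # two-thirds are new reads: 40

rereads_per_year = books_per_year - new_books   # 20 re-reads a year
lifetime_rereads = years_left * rereads_per_year

print(rereads_per_year)   # 20
print(lifetime_rereads)   # 400 of ~1,000 shelved books ever read again
```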

Note that I'm not advocating throwing most of them out. I don't know which I will re-read, and I want a choice. (A friend recently said he re-reads The Lord of the Rings every year. It seems such a waste if you do have a book event horizon looming.) However, what I have started doing is disposing of any book that, like the one I'm re-reading at the moment, makes me think 'I'll certainly never read that again' - if only to make a bit more room on the shelves.

Monday, 8 February 2016

Black hole firewall paradox? Frankly, my dear, I don't give a damn

Image based on NASA image, credit ESA/NASA/SOHO
As someone who writes about physics and cosmology I occasionally get asked my opinion on something like the black hole firewall paradox. If I'm brutally honest (which I rarely am, because I'm far too polite) I will reply: 'I don't know. I don't care. It bores me stiff.'

In case you aren't sure what the paradox is, it emerges from a combination of quantum theory and general relativity (which don't go together, but hey), and relies on piling about four levels of mathematical supposition on top of each other to come to the conclusion that the information that could be considered to exist on the event horizon of a black hole can't (as it was hypothesised it did) represent all the information in the 3D interior with gravity included, and 'therefore' something passing through the event horizon would burn up. Simples.

This topic involves theorising about a phenomenon that almost certainly doesn't exist in the real universe, using physics that almost certainly doesn't apply. Now, medieval theologians are often maligned by suggesting they wasted their careers arguing how many angels could dance on the head of a pin. They didn't - it's a myth. But physicists really have spent a reasonable amount of time and effort on this modern day equivalent. 

Personally I'm much more interested in science that helps us understand phenomena we know exist than I am in mathematically driven flights of fantasy. Show me some observational or experimental evidence for a firewall and I will get excited. But stare at your navel and make it up and I really don't care. 

Don't get me wrong. I'm not saying that theoreticians should be prevented from playing around with these ideas, just as mathematicians shouldn't be stopped from thinking about mathematical structures that have no relevance to the real world. But I do think us science writers give far too much exposure to this kind of thing.

So, how many angels do you reckon could dance on the head of a pin?

Monday, 1 February 2016

What would you add?

I have just been moved into a new office at Bristol, with the luxury of a whiteboard, which I felt I ought to fill with amusing/meaningful quotes on writing etc. What would you add?

The strange case of ethnicity and nationality on the screen

I was thinking on my walk to university about how different modern screen actors are from those in my youth. Back then, any attempt at a different accent was fraught with difficulties. I have to confess to having a bit of a thing for Hayley Mills when I was about 11, but I found it hard to forgive her for her attempts at Yorkshire and American accents. And who can forget the 'delights' of Dick van Dyke's cockney? Yet now you never know if an actor is Australian, American or British - they all seem to do accents near-perfectly.

However, that is only indirectly the topic of this post. We rightly are now repelled by white actors 'blacking up' for non-white roles. Try watching Peter Sellers or Spike Milligan doing 'Indian', for instance. And I can totally understand the raised eyebrows when a white actor was recently cast as Michael Jackson. But why, I wonder, do we ignore other situations where actors pretend to be of a race or nationality that they aren't?

This is where we get back to those accents. Okay, modern actors mostly can do very good accents from different countries - but is it acceptable for them to do the equivalent of 'blacking up' in this way? There's an even stronger argument with red hair. As someone (formerly) with red hair, I'm well aware that it has ethnic origins. Yet actors with no appropriate ethnicity often dye their hair red in films. Is that acceptable?

Let's be clear. I'm not saying this as an apologist for white people playing black roles (or vice-versa). I don't think that's usually acceptable (there should surely be exceptions for parody etc.). But I genuinely ask, assuming that this isn't in the best possible taste, why it doesn't also apply to Americans playing Brits or brown-hairs playing redheads.

Friday, 29 January 2016

Another competitor for 'overblown science headline of the year'

Thanks to Ian Bald for pointing out the impressive headline 'The Death of Relativity Lurks in a Black Hole's Shadow' in Wired.

What's so impressive here is just how much it's possible to get wrong in a single headline. Black holes, of course, don't have 'shadows.' I think what they mean is its event horizon, though the article is so fuzzy it's difficult to be sure.

However, the real shocker is the apparent claim that general relativity is dead. Here's the thing. No it isn't. What the article actually says is that if a black hole's 'shadow' (event horizon?) isn't perfectly spherical or isn't just the right size for its mass, then general relativity's predictions would be wrong. Well, duh. This would also be true if it were pink or singing the Stars and Stripes. Note, however, that no one has discovered that its shape or size is different from prediction. (Or that it's pink.) They're just saying that we might be close to being able to make a measurement to see if it lives up to prediction. That's all.

Even if there is a disparity, as the article says 'If Einstein is wrong, general relativity won’t go away—it’s too good at what it does. It just won’t be the whole story anymore.' Right. And that fits with the headline how?

I appreciate editors want headlines that grab people's attention, but if they are going to deviate so far from the facts in order to do so, why not go the whole hog? I look forward to the headline on an article about a new extrasolar planet, where the story is that it's about the right size for life to read:


Why not? It makes as much sense.

Thursday, 28 January 2016

Hover cars, tri-vees and v-phones

I've just re-read a science fiction book I quite liked as a teenager. Called Prisoner of Fire, it's by the now largely forgotten British science fiction author Edmund Cooper. Back in the 70s he was quite popular and wrote a whole string of SF novels, but, to be honest, I can see why he's largely forgotten. The writing style seems from a different era, mannered and dated.

However, Cooper's ideas are still interesting. The topic of the book is the common enough SF trope of paranormals - it features a number of children with mental abilities, able to read minds, to block other telepaths, or to kill remotely. The reason it's interesting is that Cooper examines what such a capability would mean for governments, both in terms of protecting themselves and espionage, and how it could lead to an 'end justifies the means' attitude to the young telepaths as weapons.

That's not why I bring the book up now, though, so forgive the long introduction. Prisoner of Fire, written in the 1970s and set in the 1990s, wildly over-predicts technological advances. This was a common problem in the 60s and 70s. Things had moved on so much since the Second World War that it was assumed changes would be even more dramatic in the next 20-30 years. (Think how much 2001: A Space Odyssey from the late 60s overshoots what 2001 was like.) So Cooper merrily deployed hover cars, tri-vees and v-phones.

As I point out in Ten Billion Tomorrows, the problem with hover cars and tri-vees (in case you hadn't guessed, TVs that project a 3D image into empty space, a bit like the Princess Leia bit in the first Star Wars) is that the authors weren't really thinking through the technological advances required. We just don't know how to practically make a car that floats or to project a hologram onto thin air. However, the v-phone is a more interesting one because we effectively have the technology but rarely use it.

I assume v-phones were video phones (it's never explicitly explained). These days, pretty well any smartphone or internet-connected computer is, in effect, a video phone. Using Skype or FaceTime, we can make video calls. And occasionally, for example, when family is on the other side of the world, they are very effective. But for 99% of our calling we don't use them. Because they feel strangely unnatural. The assumption has always been (video phones have been talked about since Edison and Tesla's day) that we would get the same benefits from a video call as a face-to-face conversation. And this can work with a sophisticated video conferencing setup. But for a chat on the phone it's a disaster.

There seem to be a number of reasons for this. One is that a smartphone is too up close and personal. It doesn't seem quite as bad on a computer where you can sit well back from the camera, because the view will take in a fair amount of the room. But on a smartphone video call, the other person's face pretty much fills the screen. To have an actual conversation with a person whose face fills your vision to that extent you would need to be around 10cm away from their face - a position that we just don't have conversations in, even with intimates, let alone strangers.

Another difficulty is focus. Although good listeners spend a lot of time looking at the person they are talking to, they also look away a fair amount, if only for fractions of a second. A solid focus on someone's face is intimidating. But in a video call, the other person is pretty much constantly looking straight at you.

Finally (I'm sure there are more issues, but these are the three that occurred to me), we don't always want the exposure that comes with being seen. A telephone is useful for many conversations precisely because we don't want to give too much away, whether it's because we're answering the phone in our pyjamas, because the room is a mess, or because it makes it easier to lie.

So poor old Edmund failed on all three, but the v-phones were arguably the most interesting fail because it's technology we can use, yet mostly choose not to.

Wednesday, 27 January 2016

Fascinating mangling of falsification

I have just read an article (don't ask me why - this is the wonder of Facebook) which tried to defend Mormonism from the worrying details of its origins. The piece included this:
Many intellectuals argue that “negative evidence” is supreme. To understand what they mean by this, consider the hypothesis that “all swans are white.” According to these intellectuals, it doesn’t matter how many white swans you find, you never really prove that “all” swans are white. However, as soon as you find one black swan, you have disproved the theory that “all swans are white.” They conclude that positive evidence doesn’t ever really prove anything, but negative evidence can. And it’s easy to see why they think that way. 
This is the approach that ex-Mormons have taken to their faith. In the face of unsettling information, they disregard all of the positive evidence because they think that a few points of negative evidence is sufficient to end the discussion. And given how logical the above reasoning seems to be, it is no wonder why. But they are still wrong. 
To understand why, consider another example. After first discovering the planet Uranus, astronomers attempted to predict its orbit by using Sir Isaac Newton’s laws of physics. They could observe the orbit of Uranus with their own eyes, but when they used Newton’s mathematical models to predict that orbit, they failed time and again. It made no sense. Newton’s laws had been right about so many things, but astronomers had found a case in which Newton’s laws did not work. So, was Newton wrong? Were his laws not quite as infallible as they had seemed? In light of this “negative evidence,” it would have been easy to conclude just that. 
However, years later, astronomers discovered another planet, Neptune. And as it turns out, when astronomers accounted for the mass of this newly discovered planet, Newton’s laws predicted the orbit of Uranus perfectly. So, as it turned out, it wasn’t that Newton’s laws of physics didn’t work. It was that they didn’t seem to work. And that’s because the astronomers simply didn’t have all the relevant information and context.
There's so much to get your teeth into here, but we'll pick out two key points. First there's the ad hominem attack. 'Many intellectuals argue... According to these intellectuals... and it's easy to see why they think this way.' Implication: intellectuals don't know what they are talking about. Don't listen to them. Note particularly 'According to these intellectuals, it doesn't matter how many white swans you find.' Forget 'According to intellectuals.' It's just true. It doesn't matter how many white swans you find. All swans are not white. Are they arguing otherwise?

However, no one suggests that falsification is usefully applicable to everything. Which is why it's odd that they then give an example where it isn't properly used. All scientific evidence is provisional. The black swan disproves the 'all swans are white' hypothesis, and that is the best data at the time and the only sensible viewpoint. But should it later prove that the 'black swan' was an unusual variant of goose and not a swan at all, the hypothesis could recover. However, the Newton example used in the extract from the article above fails on a number of counts.

First, the orbit of Uranus didn't show that 'Newton's laws of physics don't work'; it showed that they didn't apply in that circumstance. There are plenty of other examples (Mercury's orbit, for instance) where they will never apply. As it happened, in the case of Uranus, it was because the astronomers didn't take into account the full situation. But there was nothing wrong with the assertion that Newton's law of gravitation didn't correctly describe the orbit of Uranus in the known solar system of the time. And until other factors were brought in, one possibility was that this was a case (like the orbit of Mercury) where Newton's law wasn't appropriate.

This argument is then used to suggest that yes, there are worrying aspects of the early history of Mormonism that cast its basis into doubt. Until you can show why that negative evidence is misleading - and that isn't happening - you can have all the positive evidence you like (which is what, exactly?) and the negative evidence still stands. Even in the Uranus example, the results showed there was something wrong with the astronomers' assumptions. Falsification remains a powerful tool, and a valuable one in cases like this.

Tuesday, 26 January 2016

The Sex Life of a Comedian - review

Written by stand-up comedian Dave Thompson, The Sex Life of a Comedian delivers some home truths about life on the road. It's hard not to believe that the main character Doug Tucker's last-minute arrivals at venues, or the way he spends hours crossing the country for underpaid gigs in unpleasant dives, have some inspiration in reality. As, I suspect, does the way that the various comedians who come into Tucker's life become huge successes or fail in ways that are little connected to their talent.

However, this is a novel, not a memoir, for which we can assume Thompson is thankful, because Doug Tucker's life is no bed of roses. Admittedly Doug seems to enjoy (with occasional regret and shame) the huge amount of (explicit) sex and drug taking he encounters, but there is a murky unpleasantness to his existence that shades into outright gangland violence. And Doug's luck rarely stays positive for long, while the car crash events that pull his life apart come with painful regularity.

To begin with, as we hear of Doug's extremely colourful sex life, it's hard not to think of this as a modern version of those 1970s 'Confessions of a Window Cleaner' type books. I never read one, but my suspicion is that they would have had the same, rather simplistic narrating style, with a series of sexual escapades (though I doubt if the content of these was as explicitly portrayed as Doug's). But as the Tucker story develops, I was reminded much more of the books of the once extremely famous Leslie Thomas.

In part it was the period feel - the first part of the book is set in the nineties, but it feels more like the seventies - but mostly it was the similarity to Leslie Thomas's classic story arc of a likeable but weak-willed central character who is manipulated sexually and practically by unpleasant people to produce a trajectory that begins with a degree of success but that ends in a disastrous spiral of destruction. Like the best of Thomas's central characters, Doug Tucker has an element of an innocent, introduced to a dark world that seems first enticing and then destructive. And he has a hint of mystery about him with his rabid dislike of children and frequent reference to his mummy's knife.

I had been warned about the sex scenes, which are definitely not suitable for reading on the train if the person next to you glances at your book (as I experienced), but I found the rampant drug taking more disturbing, while the ending seemed rushed and not entirely satisfactory. I certainly wouldn't buy this if you are easily shocked or looking for a jolly romp, rather than a gut-wrenching story. However, by the time I was a quarter of the way in I had to discover Doug Tucker's fate, and it's a book that I won't forget in quite a while.

The Sex Life of a Comedian is available as an ebook or as a paperback.

Monday, 25 January 2016

I am not a number

I've just read The End of Average for review, and I couldn't help letting out a little whoop of joy when it totally trashed psychometric testing.

I am talking about mechanisms like the Myers Briggs type profile, along with a whole host of rivals, all used by businesses in recruiting and team building to analyse a personality and assess how an individual will work with others. 

The problems I have always had with the approach are several-fold. It's based primarily on Jungian theory which has little scientific basis. Your personality type is self-determined, so, while it's not surprising it often feels right, that doesn't make it accurate. And I was always doubtful about the cultural norms of the mostly US-devised tests being applied worldwide. Infamously there used to be a question about whether you preferred a gun or something constructive (I can't remember what) - which clearly would have different resonance in the US and Europe. 

Now, though, there are much stronger grounds for concern. The End of Average points out that personality profiles don't reflect the behaviour of individuals, but rather they predict the average behaviour of a group of people, which isn't the same thing. If you are an ENTP like me, it doesn't say how you will behave, but how, on average, people with the same profile will behave. As the book says 'In fact, correlations between personality traits and behaviours that should be related - such as aggression and getting into fights, or extroversion and going to parties - are rarely stronger than 0.3.' The same applies to academic achievement and professional accomplishments. This means your personality traits, as identified by the test, should reflect around 9 per cent of your actual behaviour, while getting over 90 per cent wrong.
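Where does 9 per cent come from? A correlation of 0.3 explains r-squared of the variance in behaviour - a quick sketch of that standard statistical step:

```python
# Variance in behaviour explained by a trait with correlation r is r**2.
r = 0.3                        # typical trait-behaviour correlation
variance_explained = r ** 2    # 0.09

print(f"explained:   {variance_explained:.0%}")      # explained:   9%
print(f"unexplained: {1 - variance_explained:.0%}")  # unexplained: 91%
```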

Underlying this is the relatively recent (if entirely obvious) discovery that we don't have one personality/behaviour but it varies depending on the situation. A teenager, for instance, behaves very differently with a group of peers and with his or her grandmother. That's obvious. So why do we expect a single score on a handful of dimensions to reflect how we will behave in all circumstances? It's bizarre.

I don't expect companies to stop using these tests any time soon. Come on - some still use 'graphology', expecting handwriting to give insights into personality. But employers and academics should at least be thinking twice about what they are testing and why.