So Google's AI has won the final game in the Go challenge against a master player - making it 4 games out of 5. If you are unfamiliar with Go or with the challenge then you might want to read the BBC story about the victory.
This has prompted a new spate of humans-doomed-by-AI stories. However, they seem to be missing a fundamental point - that the AI in question is being successfully applied to the things that people are naturally bad at.
When applying any technology to automate a human activity, be it physical or mental, it makes total sense to look at activities that human beings struggle with. After all, there has to be a commercial aspect to the application, even if only in principle to start with. Nobody would argue that mechanical excavators are a threat to a species with a finite capacity for digging; rather, they are seen as a boon - so what is it with AI and games?
Games, by definition, need to be fun, and an important part of that comes from the challenge they represent - they need to be hard and require some effort to learn, to play or to master. Go is a game that has a perfectly logical basis but a vast number of permutations. Part of the challenge to a human player is to persuade that three-pound organ of general intelligence in our heads to apply itself to that domain. This is not easy - it is not a task for which our brains were optimised by our evolutionary past - which is precisely what makes it fun.
However, those same brains can work out better strategies, and while we cannot change our brains to apply them, we can embed them in machines that can - machines dedicated to that domain. So we can create computers that beat Go grandmasters - or even the best human players at "Jeopardy".
The mistake is to assume that because we can do something as humans and we find it hard, it must be fundamentally hard to do - or alternatively, that because we find something easy, it must be easy to do.
I'm old enough to remember the early days of AI when it was assumed it would take a long time to develop a human-beating chess computer but a short time to develop software that could understand speech. That assumption could not have been more wrong. Chess turned out to be a trivial AI problem, while speech recognition was fiendishly difficult.
So what does this mean for the Google AI Go victory, and why is this article on a construction-related blog? Well, the point is that AI is getting better and better at doing the kinds of things that people find fundamentally hard to do - and, as it happens, the construction industry contains a great many of those things.
The way in which we currently estimate, plan, schedule, manage and design has many aspects that require human beings to make decisions they are fundamentally bad at - decisions where people are apt to overlook things, make mistakes, or simply settle on sub-optimal solutions.
The construction industry has made great strides in mechanising (and even automating) its physical processes, and it has certainly applied computer technology to marshalling ever-increasing amounts of data - but as yet the use of AI to help the decision-making process is all but absent.
The application of domain-specific AI to construction could deliver benefits on the scale of mechanisation. It could free people up to do what people are good at, and lead to major improvements in quality, efficiency, safety and - most importantly for all of us working in the industry - job satisfaction. The concept of "big data" is just starting to touch on that potential, but the application of the kind of AI involved in the Go challenge could, and probably will, be revolutionary.
So if the current AI developments are a boon, when should we start to worry? Probably when it becomes artificial general intelligence and starts to be better than us at things that human beings find easy to do. Such as understanding the meaning of something within an arbitrary context - or even convincingly demonstrating an understanding of what meaning actually is.