It has been a number of months since I last started one of my articles with a quote. Today, I go back to my roots with the following:
“I am putting myself to the fullest possible use, which is all I think that any conscious entity can ever hope to do.”
I’m curious which of my readers recognize this quote. Perhaps it will be those of a certain generation – or perhaps those who appreciate particular genres of entertainment.
Regardless, this quote is from a movie – to be specific, “2001: A Space Odyssey” – and it is uttered by none other than HAL 9000, the onboard computer.
For the past few weeks, I’ve been pondering what to write this month. I knew that I wanted to write about “artificial intelligence”, for lack of a better term. Once I read this quote, one thing immediately came to mind.
I think there is a grandiose notion that technology will eventually evolve to the point where it becomes a form of sentient being. Perhaps one day it will, but I don’t think this will happen in our lifetime – perhaps not even in the lifetime of our children.
This may be heresy to suggest, and there’s a good chance that I could be totally wrong. But as much as technology has evolved over the last 20 years, I certainly don’t see any sign of it approaching this level of sophistication.
Getting back to HAL’s quote – I’m most interested in the term “conscious entity”. I think most people would consider that a reasonably good working definition of “life”. Merriam-Webster gives a couple of interesting definitions of “conscious”:
1 : perceiving, apprehending, or noticing with a degree of controlled thought or observation
2 : capable of or marked by thought, will, design, or perception
Can technology – in any form – be considered self-aware? I don’t think so – not based on any evidence I have ever seen or heard of. I think of another movie from many years ago, “Demon Seed”. In that movie, Proteus (the computer) is clearly self-aware and does whatever is in its power to prevent being shut down.
If this is indeed one example of how a computer could be considered to be self-aware (and therefore be considered “conscious”), then I think that it’s quite apparent that we are many decades away from developing this type of sophistication.
If it is theoretically possible to build a machine that is self-aware, then I think the first fundamental problem is that technology as we know it is nowhere near sophisticated enough, fast enough, or large enough in capacity to host and run the series of programs that would define the computer’s “soul”.
Let’s take the last problem – that of capacity. How much data could the human mind hold? The answer itself is actually the whole root of the problem: no one really knows how much information the human brain can hold. One researcher from Syracuse University has speculated that the human mind can hold perhaps in the range of 500-1,000 terabytes. One terabyte is 1,000 gigabytes, so considering that the notebook I’m using has a 500 GB hard disk, the mind may hold 1,000 to 2,000 times the capacity that I am working with now.
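The comparison above is easy to sanity-check. Keep in mind that the 500-1,000 TB figure is a speculative estimate cited above, not an established fact:

```python
# Back-of-the-envelope check of the brain-vs-notebook capacity comparison.
# The 500-1,000 TB brain estimate is speculative, as noted in the text.
GB_PER_TB = 1_000          # decimal units, as used in the article

brain_low_gb = 500 * GB_PER_TB     # low-end estimate: 500 TB in GB
brain_high_gb = 1_000 * GB_PER_TB  # high-end estimate: 1,000 TB in GB
notebook_gb = 500                  # the 500 GB hard disk mentioned above

print(brain_low_gb // notebook_gb)   # 1000 -> at least 1,000x the disk
print(brain_high_gb // notebook_gb)  # 2000 -> up to 2,000x the disk
```

So even the low end of the estimate works out to a thousand notebook hard disks.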
However, this leads to an interesting point: that memory requirement covers only the storage of as much data as is in my mind. It does not take into consideration the extraordinarily complex program code that would be needed to process and analyze that data. Let’s say, for the sake of argument, that we’d need an additional 500-1,000 TB for the program itself.
The biggest argument against the possibility of sentient machines in anything but the distant future is that there is so much we don’t understand about the human mind. There are numerous mysteries around how data is processed, how decisions are made, how judgements are evaluated and how goals are analyzed.
It doesn’t matter how good a software designer or developer I am – if I don’t have a thorough and intimate knowledge of exactly how the mind works, then I can’t be expected to write software that reproduces those skills.
While your arguments are not without merit, I don't believe you are taking into consideration the concept of a technological singularity affecting the development of AI.
For instance, let's say we never figure the brain out, or never figure out the true nature of the soul. Regardless, our technology will continue to grow in complexity.
If you take a dumb, non-sentient machine and automate its potential responses out to a level of complexity near or greater than human, you would create a machine that "appears" to be intelligent from the perspective of a human mind.
If true AI is never discovered, the singularity alone will create minds that appear AI-like to humans.
As always - thank you for your views. In one respect I see your point, but I think that I would respectfully disagree.
In my own very personal opinion, a machine that merely appears to be intelligent from the perspective of a human mind probably wouldn't qualify as artificially intelligent - by my definition, anyway.
I think back to some programs written in the past that "simulate" AI. These programs "converse" with the user and, from all appearances, seem to hold real conversations. However, more often than not, this is really smoke and mirrors: specific (and very sophisticated!) program code that anticipates possible questions and responses, and uses branching logic to frame its own reply.
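The smoke-and-mirrors approach described above can be sketched in a few lines. This is a deliberately tiny illustration of pattern-matching and branching logic - the rules and phrasings here are invented for the example, not taken from any particular historical program:

```python
# A minimal sketch of "simulated" conversation: canned patterns plus
# branching logic, with no understanding behind it at all.
import re

# Each rule anticipates a shape of user input and frames a reply from it.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bare you (.+)", re.I), "Would it matter to you if I were {0}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
]

def respond(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            # Echo part of the input back, stripped of trailing punctuation.
            return template.format(match.group(1).rstrip("?.!"))
    return "Tell me more."  # fallback when nothing was anticipated

print(respond("I feel uneasy about HAL"))  # Why do you feel uneasy about HAL?
print(respond("What is the weather?"))     # Tell me more.
```

Anything outside the anticipated patterns falls through to a vague stock reply - which is exactly why these programs can seem conversational without being conscious of anything.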
Interesting thoughts - thank you for your contribution - and feel free to tell me that I'm talking smack - if I am :)
- Rick