AI is everywhere today (or at least mentions of it would have you believe so). I can't avoid it, so I might as well learn about it. (If you can't beat it, at least join in and develop it before it makes you redundant, right? That's my thinking, anyway.) It seems to be incorporated into just about every technology that affects one's daily life: Antisocial media, self-driving cars/driver-assist, drones, online shopping ...
"We are phantoms of the future,
Ordinaries from the super but we are high.
Sing your anthems to your rulers.
Disconnect from their medullas and be alive.
We are transients from the sewers,
But please treat us like we're friends when we arrive.
You can kill or just destroy them.
Here's your pistol and your poison, please decide
(Please decide)."
— Japanese Cartoon; Heirplanes; In the Jaws of the Lords of Death
If you're like me, you find this concerning. For all I know (which is frankly very little on this particular topic), AI could well be the "killer app" that puts an end to the human race (which isn't necessarily a bad thing in the grand scheme of things, in my opinion). I do, however, want to know the details of how and when that might happen. (I also think it would be better to have some understanding of the technology than to remain blissfully ignorant/oblivious. I'm of the opinion that it's better to know the details and be prepared, no matter how awful they may be, and try to have a hand on the controls than to be caught off guard.) Hell, my concerns and suspicions about it might even be proven wrong, which would be great.
Fortunately (from my perspective, anyway), AI is still pretty bad at competently doing a lot of things that most humans do fairly well (like writing decent poetry, for instance). It will likely take a while for it to clear the various hurdles to taking over the world. We still have a chance to escape it, if not outright stop it. For the time being (at least at the time of writing), humans are still important (or at least more capable and effective at performing certain tasks than supposedly smart machines).
Hopefully, this will be your starting point for learning about AI, what it needs to work, why it has failed in the past and how to keep it at bay. My aims in learning about AI and writing this are chiefly these:
- Have an accurate idea of the pros and cons of using it (or not)
- Educate (and possibly warn) people about the above
- Expand my skill set and stay relevant in software development
- Hopefully write an AI application/system that automates a lot of the tedious parts of my job, so that I can focus on being constructive
For any technology to survive, it must be useful. For a technology to be useful, it must have a domain of problems that it can solve more effectively and efficiently than the alternative, which is not using it. This is true of software and computing in general, and AI is a particular specialised case. In the past, AI didn't achieve this for various reasons. One of these was that it was ahead of its time, not having the underlying hardware (and possibly other technology) capable of supporting it. To the layman, AI is one big and generalised thing. However, it is actually an umbrella term for a domain of applications involving a number of problem-solving functions and technologies. These are chiefly process automation, human-computer interaction (HCI), data analysis, machine learning (ML) and deep learning. Each of these interconnected and interrelated functions/technologies is a big topic on its own (which can't possibly be covered in adequate detail here) and warrants further investigation. This text/series of posts is definitely not the be-all-and-end-all of writings on AI. At best, it provides a brief overview, with more to follow for the areas in which I'm interested (data analysis, deep learning, machine learning and process automation).
With that out of the way, it's time to discover AI, to see what it can do not just to us, but also for us ...
What I aim to cover initially in exploring this topic, through a number of posts (depending on interest and length):
- What can AI actually do for you?
- How data affects the use of AI
- How AI relies on algorithms (program logic) to perform useful work
- How specialised hardware enables AI to improve performance
What I (probably) will not cover is the question of whether androids dream of electric sheep, and other ruminations of a philosophic nature. I doubt that AI knows. Yes, there will be jokes, but you don't have to laugh in the face of the coming apocalypse.
Introducing AI: Hello, Computer?
Star Trek TNG
The beginning of AI has not been without its false starts and missteps, many of which can be attributed to a lack of understanding regarding what it is and what it can/should do. Popular media (books, movies and TV) have created hype (including anthropomorphism) that doesn't exactly help with that. So, let's clear that up with definitions of what AI actually is and what it isn't, as well as how it relates to computers (and computing) today.
Hasta la vista, baby!
Everyone sees AI differently (and therefore has different expectations and goals). Although my view is undoubtedly negative, my aim is to approach the topic from as factual and informative a perspective as I reasonably can, without buying into hype. You might well have a different viewpoint and not get much benefit from this text. That's fine, as long as you're not expecting AI to deliver something that it can't (at least not yet, anyway, if ever).
Defining AI
Defining the meaning of a term is especially important with technologies (or groups thereof) that have received more than a little press coverage at various times and in a number of ways. Stating that AI is "artificial intelligence" isn't actually helpful or meaningful to understanding it. Sure, the intelligence (which is ambiguous at best) doesn't come from a natural/biological source, but so what?
Determining Intelligence
Intelligence (particularly that of a non-primate nature) is hard to accurately and adequately define and quantify. Consequently, there are a number of different definitions, but we likely all agree that it comprises a number of different mental activities:
- Learning: Obtaining and processing new data/information. (I would argue that learning is more than this, that it depends on the next two activities, but then rote learning is a case in point for an argument against that.)
- Reasoning: Being able to manipulate data/information in various ways, such as following logic.
- Understanding: Considering the results of information manipulation.
- Grasping truths: Determining the validity of the manipulated information.
- Seeing relationships and patterns: Determining how valid data interacts with (or is correlated to) other data.
- Considering meanings: Applying truths to particular situations in a manner consistent with their relationship(s).
- Separating fact from belief: Determining whether the data is adequately supported by provable sources that can be demonstrated to be consistently valid.
The apparent inability of adherents of extremist political factions to do a number of these things (mainly the first two) is what leads me to question their intelligence.
The list above could easily get quite long. Even this list, short as it is, is susceptible to being interpreted in different ways by different people who accept it as valid. One thing that should be obvious is that this list details a process that can be followed by a computer application/system:
- Set a goal based on needs or wants (desired outcome).
- Gather data, assessing the value thereof in support of achieving the goal.
- Gather additional information that could support the goal.
- Manipulate/process the data so that it becomes consistent with existing information.
- Define the relationships and truth values between existing and new information.
- Determine whether the goal has been achieved.
- Modify the goal in light of the new data and its effect on the probability of success. (This is optional.)
- Repeat the second step onward as required, until the goal is achieved or the possibilities for achieving it are exhausted.
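The goal-seeking loop above can be sketched in code. This is a toy illustration with hypothetical names (`achieve_goal`, the threshold values), not a real AI system: it "gathers" random data, assesses whether each datum is worth keeping, folds it into its accumulated knowledge and tests whether the goal has been met, repeating until it succeeds or runs out of attempts.

```python
import random

def achieve_goal(target, max_rounds=100, seed=42):
    """Toy goal-seeking loop mirroring the steps above."""
    random.seed(seed)          # fixed seed so the run is repeatable
    knowledge = 0              # accumulated "information"
    for round_no in range(1, max_rounds + 1):
        datum = random.randint(1, 10)   # step 2: gather data
        if datum > 3:                   # ... assessing its value
            knowledge += datum          # step 4: fold it into existing info
        if knowledge >= target:         # step 6: has the goal been achieved?
            return round_no, knowledge
    return None, knowledge              # possibilities exhausted

rounds, total = achieve_goal(50)
print(rounds, total)
```

Note that the "intelligence" here is entirely in the programmer's choice of goal and acceptance criteria; the loop itself just grinds through data mechanically, which is rather the point of the section above.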
It is worth remembering that, although it is possible to create algorithms and provide them with data in support of achieving certain goals, a computer's capacity to achieve some level of intelligence is considerably limited. At present, computers are unable to understand much, if anything, because their functioning is strictly bound to processing and manipulating data in a purely mathematical/mechanical way. Computers are very poor at separating truth from falsehood (as the algorithms used by antisocial media Websites attest). Really, computers cannot fully implement/perform all of the activities involved in what we consider full intelligence. This might change in future, but it is unlikely to while computer operations still work on a purely/mostly mathematical model.
Humans make use of multiple types of intelligence to perform a plethora of tasks. Knowing them (and being able to categorise them) can prove useful in determining if a computer application/system can reliably replicate/simulate them. The list below is based on Howard Gardner's categorisation of intelligence, with slight modification:
- Visual-Spatial Intelligence: This has moderate potential for simulation. It makes use of charts, drawings, graphics, photos, multimedia and visual models. While robots and other technologies can make use of computer vision and similar capabilities, it is often difficult to simulate with any degree of accuracy/precision and proficiency.
- Bodily-kinesthetic: This has moderate to high potential for simulation. Body movements (such as dancing or performing surgery) require both awareness of the position of the body (including proprioception) and precision. This kind of intelligence is often leveraged by robots that are used to automate repetitive tasks that require high precision but not much grace. Do not conflate body augmentation (assistive technology that enhances existing ability) with genuine independent movement. The former is nothing more than leveraging mathematics to enhance an existing ability, while still being dependent on the person who possesses it.
- Creative: This has very low (almost no) potential for simulation, if you're doing this from scratch and not using technology developed by the likes of Microsoft and other industry heavy-hitters. That's because artistic output (drawings, musical compositions, paintings, sketches, etc.) requires imagination, invention and new patterns of thought that result in new output. While AI can appear to simulate certain patterns of thought or styles, and can even combine them to create seemingly new/unique presentations, the results are ultimately mathematical permutations of existing patterns. True creativity requires self-awareness and intrapersonal intelligence, which computers just don't have (and hopefully never will). Computers really aren't suited to creating unique output, but more to repeating a process that results in the same (or similar) predictable output for the same input.
- Interpersonal: This has low to moderate potential for simulation. The domain for this intelligence involves textual chat apps, telephone conversation, audio/video conferencing, writing and email. Interacting with others occurs at more than one level. It is intended to obtain, exchange and evaluate/manipulate information based on the experiences of others. Using natural language processing (NLP) and/or linguistic analysis coupled with predefined answers to certain questions, computers can both ask and answer basic questions. However, this is not because of any level of understanding natural human language. It occurs as a result of finding and matching keywords (including by cross-referencing), then relaying information based on them. This is a case of logical intelligence (which computers do well), not interpersonal intelligence (which they can't do).
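The keyword-matching approach described above can be shown in a few lines. This is a deliberately crude, hypothetical example (the `RESPONSES` table and `reply` function are made up for illustration): the program has zero understanding of the question; it simply looks for a known keyword and relays canned text, which is exactly the logical-not-interpersonal behaviour the point above describes.

```python
import re

# Hypothetical keyword-to-canned-answer table; no understanding involved.
RESPONSES = {
    "hours": "We are open 9am to 5pm, Monday to Friday.",
    "price": "The basic plan costs $10 per month.",
    "refund": "Refunds are processed within 14 days.",
}

def reply(question: str) -> str:
    # Extract lowercase words, ignoring punctuation such as "?".
    words = re.findall(r"[a-z]+", question.lower())
    for keyword, answer in RESPONSES.items():
        if keyword in words:    # keyword found: relay the canned answer
            return answer
    return "Sorry, I don't have an answer for that."

print(reply("What are your opening hours?"))
print(reply("Do you understand me?"))
```

Ask it anything outside its keyword table (including "Do you understand me?") and the illusion of conversation collapses immediately.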
- Intrapersonal: This has no potential for simulation. Currently, looking inward to understand one's own interests and motivations in order to set goals based on them is exclusively a human-only kind of intelligence. (It's a considerably neglected one at that.) Machines have no desires, interests, wants, or creative abilities. Applications and algorithms process data without self-awareness or understanding of their operations. They might be able to mimic introspection, but it will be shallow/superficial at best.
- Linguistic: This has low to moderate potential for simulation. This domain is concerned with books, games, speech and various forms of multimedia. Being able to communicate in words is essential for exchanging ideas and information in an easy form. While computers rely on keyword matching to answer questions, they often struggle to parse natural language (particularly spoken, as many a speech-to-text system used for auto-captioning shows) for extracting keywords. Consequently, they may fail to determine what is being asked of them, and the answers they provide make no sense. (Essentially, garbage input or processing results in garbage output.) Since humans process written and spoken information in different areas/centres of the brain, written linguistic intelligence doesn't necessarily correlate with high verbal linguistic intelligence. Currently, computers don't distinguish between/separate verbal and written linguistics, which may be why they struggle (even with extensive training).
- Logical-mathematical: Since computers are primarily logic-based mathematical machines (the very word "computer" essentially means "calculator" or "reckoning machine"), it stands to reason that the potential for successfully simulating this type of intelligence is high. AI is highly accomplished at solving logic games/puzzles and brain teasers (such as chess, go and Sudoku). Calculations, comparisons, exploring patterns and relationships are tasks at which computers excel, since that is the primary purpose for which they are built. This is the only real intelligence they possess, although they might have minimal amounts of the other six.
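The kind of exhaustive, rule-bound search that computers excel at can be demonstrated with a trivial puzzle (the puzzle itself is my own made-up example): find two digits whose sum is 10 and whose product is 21. A human might reason their way there; the computer just mechanically tries every combination, which is precisely the logical-mathematical strength described above.

```python
from itertools import product

# Brute-force search over all digit pairs (x <= y to avoid duplicates):
# the machine doesn't "reason", it exhaustively checks every candidate.
solutions = [(x, y) for x, y in product(range(1, 10), repeat=2)
             if x <= y and x + y == 10 and x * y == 21]
print(solutions)  # → [(3, 7)]
```

Chess and Go engines are (very) sophisticated elaborations of this same idea: enumerate possibilities, evaluate them against fixed rules, pick the best. No other kind of intelligence is required.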
Please note that while there are various platforms and technologies that explore the realms listed as "low", the above list is aimed at the average small development team with limited resources, not mega-corporations like Google and Microsoft. It is also important to remember that AI development is a rapidly changing landscape. What was relevant and true a few years ago may no longer be so now (and the same goes for the future).
The statements in the list above notwithstanding, basing an assessment of computer-vs-human intelligence on only one area isn't a good idea. We need additional ways to define AI. However, since this section is already quite long and I have other things to do besides write a daily post (which I'm not going to entrust to an AI), I'm going to leave it there for today.
Thumbnail: Star Wars BB8 replica toy by Sphero