
Complex Technology vs AI: What's the Difference?

March 31, 2022
ML 201 & AI

As first published in Techopedia

Often, artificial intelligence (AI) is used broadly to describe all types of systems that seem to make decisions we do not quite understand. But while many reasonably complex systems make decisions like this, that alone does not make them “intelligent.”

For example, I might not understand how my “smart” oven thermometer seems to know when my roast beef will be perfectly done, or how my garden light knows when to turn on, but the engineers putting together the (not-too-complex) mathematical equation do.

There are many other systems that, at first glance, look intelligent—but they are just constructed by smart people. We should not label these as “intelligent” because that suggests they are making their own decisions instead of simply following a human-designed path.

So, what actually counts as artificial intelligence?

A better way to distinguish (artificially) intelligent systems from those that just follow human-made rules is to look for the person who can explain the systems' inner workings (i.e., the person ultimately responsible for what the systems do).

If a classic system fails, we can go back to the drawing board and check the mechanical workings or the program code. We can identify the part that was badly engineered or wrongly programmed; and if those processes are well-governed, we can also identify the team responsible for that part of the system. After all, they wrote the lines of code that alert me to take out the roast beef or turn on the lights in the garden, and they wrote them to respond to humans, not to my neighbor’s cat.
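To make that contrast concrete, here is a toy sketch of what such hand-written rules might look like. The thresholds and sensor readings are invented for illustration and are not taken from any real device:

```python
# Toy version of a "classic" system: every decision is a rule a human
# wrote down, so a failure can be traced to a specific line and to the
# team that wrote it. All thresholds are invented for illustration.

ROAST_DONE_TEMP_C = 56      # assumed core temperature for a finished roast
GARDEN_LIGHT_LUX = 10       # assumed darkness threshold for the garden light

def roast_alert(core_temp_c: float) -> bool:
    """Alert the cook once the probe reaches the target temperature."""
    return core_temp_c >= ROAST_DONE_TEMP_C

def garden_light_on(ambient_lux: float, motion_height_cm: float) -> bool:
    """Switch the light on for human-sized visitors in the dark.
    If the neighbor's cat triggers it, the bug is on the next line,
    and the team that wrote that line is responsible."""
    return ambient_lux < GARDEN_LIGHT_LUX and motion_height_cm > 100
```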

If a system does not work in all cases (e.g., different cuts of meat, or wild animals in other parts of the world), the engineers can further refine those equations to cover the previously unforeseen cases.

Who is responsible for AI?

In other words, who is at fault when a robot picks up an egg and applies too much pressure?

This is one of the classic examples in which an artificially intelligent system has been trained on many examples of objects and how to pick them up. The system has learned some internal representation of what to do, perhaps using a prominent AI learning technique called reinforcement learning, in which a human trainer provides feedback about the outcome of each attempt.
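As a rough illustration of that kind of reward-driven training, here is a tiny bandit-style sketch in Python. The pressure levels, the simulated trainer and the reward values are all invented for the example; a real robot would involve far more than this:

```python
# Bandit-style sketch of learning a grip pressure from reward feedback.
# The pressure settings, the simulated trainer and the rewards are
# invented for illustration; this is not a real robot or robotics API.
import random

PRESSURES = [1, 2, 3, 4, 5]             # discrete grip-pressure settings
values = {p: 0.0 for p in PRESSURES}    # learned estimate of each setting's reward
counts = {p: 0 for p in PRESSURES}
EPSILON = 0.2                           # exploration rate

def trainer_feedback(pressure, obj):
    """Stand-in for the human trainer: +1 for a firm, non-destructive grasp."""
    if obj == "tennis ball":
        return 1.0 if pressure >= 3 else -1.0   # needs a firm grip
    if obj == "egg":
        return 1.0 if pressure <= 2 else -1.0   # crushed above setting 2
    return 0.0

def pick_pressure():
    if random.random() < EPSILON:               # occasionally explore
        return random.choice(PRESSURES)
    return max(PRESSURES, key=lambda p: values[p])

# Training loop: the robot only ever practices on tennis balls.
for _ in range(500):
    p = pick_pressure()
    reward = trainer_feedback(p, "tennis ball")
    counts[p] += 1
    values[p] += (reward - values[p]) / counts[p]   # incremental average

best = max(PRESSURES, key=lambda p: values[p])
print("Learned pressure:", best)                          # firm grip, fine for balls
print("Reward on an egg:", trainer_feedback(best, "egg")) # and disastrous for eggs
```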

In this case, when the robot picks up an egg and crushes it, we know we may have chosen the wrong training (perhaps only ever having the robot pick up tennis balls and baseballs). So, here, we can blame the human who selected the training material; but then we are really applying the intelligent system to something it has no clue about.

If no such obvious external mistake was made, we cannot really debug the system further and find out who is at fault. In the end, the system extracted knowledge from its past training activities and applied this knowledge to a new situation.

This is neatly analogous to humans trying to explain complex decisions themselves: You will never get the full explanation, which is why early decision systems that aimed to extract all knowledge from human experts and put it into rule-based form failed so miserably. And none of the explainable AI techniques will succeed there either. Certainly, for simple systems, we can extract an understandable representation of what was learned; but for truly interesting systems making complex decisions, we cannot.

How can we predict AI outcomes?

All we can really aim for are simple approximations, which usually hide the interesting little details, or “what if” games such as, “What would you decide if this aspect were different?” In these cases, AI is better than humans in one way: It will keep giving the same answer. Humans tend to change their decisions over time, or simply because you asked them one too many times.
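As a sketch of both ideas, the snippet below uses scikit-learn as a stand-in for the opaque system: a shallow decision tree serves as the simple approximation, and a modified input plays the “what if” game. The synthetic data, the model choice and the feature being flipped are all illustrative assumptions:

```python
# Sketch of both ideas, with scikit-learn standing in for the opaque system:
# (1) a shallow decision tree as a simple approximation of a black-box model,
# (2) a "what if" query the model answers identically every time it is asked.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# (1) Simple approximation: a depth-2 surrogate tree mimicking the black box.
# It is readable, but it hides most of the interesting little details.
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
surrogate.fit(X, black_box.predict(X))
print(export_text(surrogate))

# (2) "What if" game: change one aspect of a case and compare the answers.
case = X[0].copy()
what_if = case.copy()
what_if[3] = -what_if[3]                     # flip a single input feature

first_answer = black_box.predict([case])[0]
for _ in range(100):                         # asked again and again, the model
    assert black_box.predict([case])[0] == first_answer  # never changes its mind
print("original:", first_answer, "what-if:", black_box.predict([what_if])[0])
```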

If an AI system's outcome is undesired, such as a robot picking up an egg and crushing it, we may be able to blame the learning system's programmers. However, we can only blame them if they promised their system would learn how to pick up eggs from a training set consisting only of tennis balls and baseballs.

In this example, it is much more likely that programmers applied a more generic training procedure that was not designed specifically to teach the robot how to pick up eggs. So the fault lies with whoever picked the training method and the training examples—not the designer of the underlying system itself.

Is it possible to ensure an artificial intelligence system is bias-free?

A robot picking up an egg and crushing it isn't exactly a grave problem, because the badly chosen training examples are obvious (and who doesn’t like scrambled eggs?). However, it becomes an issue when the connection between the training examples and future problems with the learned system is less obvious.

Our training set above clearly ignores eggs. One could claim, then, that we have trained a biased system that discriminates against eggs by using a training set that excludes them. And that is where more tangible issues rear their heads: How do I ensure my customer training data is not directly biased with respect to gender, birthplace, age and education, or worse, indirectly biased through some other properties in the data that allow my learning system to infer some of those discriminatory attributes? (Also read: You Can't Eliminate Bias from Machine Learning But You Can Pick Your Bias)
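One simple check for that kind of indirect bias is to test whether the protected attribute can be reconstructed from the remaining columns. The sketch below assumes a hypothetical customers.csv with numeric feature columns, a "gender" column and a "target" column; the file, column names and warning margin are all invented for illustration:

```python
# Minimal check for indirect bias: can the protected attribute be
# reconstructed from the other columns? The file "customers.csv", the
# column names and the 0.05 margin are hypothetical assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

customers = pd.read_csv("customers.csv")        # hypothetical training data
protected = customers["gender"]                 # attribute we must not use
features = customers.drop(columns=["gender", "target"])  # numeric features assumed

# Try to predict the protected attribute from the remaining features.
probe = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(probe, features, protected, cv=5).mean()

baseline = protected.value_counts(normalize=True).max()  # majority-class rate
if accuracy > baseline + 0.05:
    print(f"Warning: other features predict gender with accuracy {accuracy:.2f} "
          f"(baseline {baseline:.2f}); the data likely contains proxy variables.")
```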

Ensuring an AI system is bias-free is still an area of very active research, and this type of problem shows up in other areas of our daily lives as well. Recent news about autonomous vehicles misinterpreting street situations falls into the same category: It is impossible to ensure the training examples cover all possible scenarios, and therefore impossible to guarantee the car does not make the wrong choice in a rare and unforeseen situation.

In fact, it is almost ironic that safety-critical AI systems are often safeguarded by classic, human-derived rules to avoid catastrophic decisions in borderline cases.

Conclusion

An intelligent artificial system draws conclusions or extracts knowledge from observations and puts that internal representation to work to make new decisions. And crucially, that internal representation was not created by a human—but by the system itself.

Moreover, in truly intelligent systems, humans are only indirectly involved: They devised the system that creates the internal representation the AI draws on to make decisions, and they decided which learning system and training examples were used.