Knowledge is the foundation of intelligence, whether artificial intelligence or conventional human intellect. Applying intelligence to business or personal problems requires knowledge of those problems (and their potential solutions) to overcome them effectively.

The knowledge underpinning AI has traditionally come from two distinct methods: statistical reasoning, or machine learning, and symbolic reasoning based on rules and logic. The former learns by correlating inputs with outputs, progressively identifying patterns; the latter applies expert, human-crafted rules to particular real-world domains.
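To make the contrast concrete, here is a minimal sketch in Python: a statistical model that learns a pattern from labeled examples, next to a symbolic rule encoding the same domain knowledge by hand. The fraud scenario, features, data, and threshold are all hypothetical, and scikit-learn stands in for whatever learning library a team actually uses.

```python
# Two routes to the same decision: learned correlation vs. expert rule.
from sklearn.linear_model import LogisticRegression

# Statistical reasoning: fit a tiny model that correlates inputs with outputs.
# Hypothetical training data: [transaction_amount, hour_of_day] -> fraud label
X = [[20, 14], [15, 10], [900, 3], [850, 2], [30, 16], [1000, 4]]
y = [0, 0, 1, 1, 0, 1]
model = LogisticRegression().fit(X, y)
print(model.predict([[950, 3]]))  # pattern learned from examples alone

# Symbolic reasoning: an expert encodes the same knowledge as an explicit rule.
def is_fraud(amount, hour):
    # Hand-crafted rule: large transactions in the small hours are suspicious.
    return amount > 500 and hour < 6

print(is_fraud(950, 3))
```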

Practical AI relies on both approaches; they complement one another, raising levels of intelligence and performance. Enterprise knowledge graphs, domain knowledge repositories that also supply ideal machine learning training data, furnish the knowledge base that lets the two work together at full capacity.

Symbolic reasoning
Knowledge graphs form the basis of symbolic reasoning systems that apply expert rules to real-life business problems. Regardless of the particular domain, data source, data format, or use case, they seamlessly align data of any variety according to uniform standards focused on the relationships between nodes. Semantic rules and inferencing create new kinds of understanding of business knowledge that machine learning alone could not generate. One example is optimizing the array of sensor data found in smart cities for event planning, based on factors such as traffic patterns, weather conditions, previous event outcomes, and the preferences of the hosts and their constituencies. Symbolic reasoning has the further advantage of explainability: in the end, one can still explain how and why particular new knowledge and new suggestions were generated.
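The following sketch shows this kind of inferencing on a toy knowledge graph, assuming a hypothetical smart-city vocabulary (SensorA, DowntownDistrict, and the locatedIn/partOf predicates are invented for illustration). A single transitivity rule derives a fact that was never stated explicitly.

```python
# A minimal sketch of semantic inferencing over a knowledge graph.
# The graph is a set of subject-predicate-object triples.
triples = {
    ("SensorA", "locatedIn", "DowntownDistrict"),
    ("DowntownDistrict", "partOf", "Springfield"),
    ("SensorA", "reports", "HeavyTraffic"),
}

# Inference rule: if something is located in a district and that district is
# part of a city, then it is also located in the city (transitivity).
def infer_location(triples):
    inferred = set()
    for s, p, o in triples:
        if p == "locatedIn":
            for s2, p2, o2 in triples:
                if s2 == o and p2 == "partOf":
                    inferred.add((s, "locatedIn", o2))
    return inferred

print(infer_location(triples))  # {('SensorA', 'locatedIn', 'Springfield')}
```

Because the rule itself is legible, the provenance of the inferred triple is too, which is the explainability advantage noted above.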

Statistical feedback loop
Knowledge graphs can, in turn, benefit greatly from machine learning, which adds value to symbolic, rules-based systems. When modeling car-driving behavior, for example, modern image recognition systems (which rely on deep learning) produce more realistic models when deployed in conjunction with rules. More generally, machine learning complements rules-based AI by creating a feedback mechanism that improves the latter's outcomes and enhances the knowledge in semantic graphs.
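A minimal sketch of that feedback paradigm, with hypothetical scores and outcomes: the rule makes decisions, outcomes are logged, and a simple search over the log tunes the rule's threshold. In practice a trained model would drive the adjustment; the grid search here just keeps the loop visible.

```python
# Feedback loop: rule output -> measured outcome -> learning -> updated rule.

# Historical log: (input score the rule saw, whether the outcome was positive)
history = [(0.30, False), (0.45, False), (0.55, True),
           (0.70, True), (0.80, True), (0.40, True)]

def rule_fires(score, threshold):
    return score > threshold  # the symbolic rule

def accuracy(threshold):
    # How often did the rule's firing agree with the measured outcome?
    hits = sum(rule_fires(s, threshold) == outcome for s, outcome in history)
    return hits / len(history)

# "Learning" here is a search over candidate thresholds for the best agreement.
best = max([0.3, 0.4, 0.5, 0.6, 0.7], key=accuracy)
print(f"updated rule threshold: {best}")
```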

In the preceding smart city use case, organizations can apply machine learning to the outcomes of rules-based systems, especially when those outcomes are measured in terms of KPIs. These metrics can assess, for example, the success of the event as measured by attendee enjoyment, the perceived and actual costs to the municipality and to the organizations involved in the event, the attendance rate, and so on. Machine learning algorithms can then analyze those KPIs to produce predictions that improve future events.
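A hedged sketch of that idea, assuming hypothetical event features and KPI values and using scikit-learn as a stand-in regressor: historical events train a model that can score candidate plans for future ones.

```python
# Regressing an event KPI on planning factors; all names and data are
# illustrative assumptions, not a prescribed pipeline.
from sklearn.ensemble import RandomForestRegressor

# Per-event features: [traffic_index, rain_mm, marketing_spend_k]
X = [[3, 0, 20], [7, 12, 15], [2, 0, 30], [8, 5, 10], [4, 2, 25]]
# KPI to predict: attendance rate (fraction of venue capacity filled)
y = [0.82, 0.41, 0.93, 0.37, 0.75]

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# Score a candidate plan for a future event before committing to it.
print(model.predict([[3, 1, 22]]))
```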

Horizontal applicability
The interplay of the knowledge graph foundation with both the statistical and the symbolic forms of AI is critical for several reasons. First, they all augment each other: the graphs provide the knowledge for rules-based systems and optimize machine learning training data; the machine learning feedback mechanism improves the graph's knowledge and the rules; and the output of rules-based systems provides knowledge on which to run machine learning. Second, this process applies to any number of horizontal use cases across industries. Most of all, however, this combination empowers advanced applications of AI that make simple automation seem mundane.

There are risk management use cases in law enforcement and national security in which one can observe terrorists, for example, integrate that information, and construct hypothetical events or scenarios based on probabilities determined by machine learning. Rules-based systems for security measures are then transformed into probabilistic rules-based systems that reveal the likelihood of events occurring and how best to mitigate them. Similar processes apply to many other instances of risk management.
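A brief sketch of what such a probabilistic rule might look like, with an invented stand-in for the model's probability output and hypothetical signals and thresholds: rather than firing yes/no, the rule grades its response by likelihood.

```python
# A probabilistic rules-based system: a learned model supplies an event
# likelihood, and the symbolic rule acts on that probability.

def event_probability(signals):
    # Stand-in for a trained model's output; here, a toy weighted score.
    weights = {"travel_pattern": 0.5, "network_contact": 0.3, "purchase": 0.2}
    return sum(weights[s] for s in signals if s in weights)

def mitigation(signals):
    p = event_probability(signals)
    # The rule grades its response by the likelihood the model reports.
    if p > 0.7:
        return "escalate for investigation"
    if p > 0.4:
        return "increase monitoring"
    return "no action"

print(mitigation(["travel_pattern", "network_contact"]))  # escalates (p = 0.8)
```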