SymbolicAI: A framework for logic-based approaches combining generative models and solvers (arXiv:2402.00854)
He gave a talk at an AI workshop at Stanford comparing symbols to aether, one of science’s greatest mistakes. Limitations were discovered in using simple first-order logic to reason about dynamic domains: problems arose both in enumerating the preconditions for an action to succeed (the qualification problem) and in providing axioms for what did not change after an action was performed (the frame problem). Constraint solvers perform a more limited kind of inference than first-order logic.
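To make those two difficulties concrete, here is a minimal sketch in the spirit of STRIPS-style planners (the robot predicates are invented for illustration): preconditions must be enumerated explicitly, and the convention that unmentioned facts simply persist is exactly what early first-order axiomatizations had to spell out action by action.

```python
# A minimal STRIPS-style action representation (hypothetical predicates).
# Preconditions must be enumerated explicitly; the "frame" convention is
# that any fact not named in the add/delete effects persists unchanged.

def apply_action(state, preconditions, add_effects, delete_effects):
    """Apply an action to a state (a set of ground facts), or return None
    if any precondition fails."""
    if not preconditions <= state:
        return None  # some precondition for the action is not met
    # Frame assumption: every fact outside delete_effects carries over.
    return (state - delete_effects) | add_effects

state = {"at(robot, room1)", "door_open(room1, room2)", "light_on(room1)"}
new_state = apply_action(
    state,
    preconditions={"at(robot, room1)", "door_open(room1, room2)"},
    add_effects={"at(robot, room2)"},
    delete_effects={"at(robot, room1)"},
)
print(new_state)  # light_on(room1) persists without an explicit frame axiom
```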
However, given the recent evolution of neural/deep learning models, the field of neuro-symbolic integration (NSI) is now gaining more momentum than ever. This simple symbolic intervention drastically reduces the amount of data needed to train the AI by excluding certain choices from the get-go. “If the agent doesn’t need to encounter a bunch of bad states, then it needs less data,” says Fulton. While the project still isn’t ready for use outside the lab, Cox envisions a future in which cars with neurosymbolic AI could learn out in the real world, with the symbolic component acting as a bulwark against bad driving. IBM’s NSQA system achieves state-of-the-art accuracy on two prominent KBQA datasets without the need for end-to-end dataset-specific training. Because it uses explicit, formal reasoning, NSQA can also explain how it arrived at an answer by precisely laying out the steps of its reasoning.
Modern structure-aware neural models handle relational inputs by reflecting variations in the input data structures as variations in the structure of the neural model itself, constrained by a shared parameterization (symmetry) scheme that encodes the model prior. From a more practical perspective, a number of successful NSI works have utilized various forms of propositionalisation (and “tensorization”) to turn relational problems into convenient numeric representations to begin with [24]. However, there is a principled issue with such approaches based on fixed-size numeric vector (or tensor) representations: these are inherently insufficient to capture the unbounded structures of relational logic reasoning. Consequently, all these methods are merely approximations of the true underlying relational semantics.
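As a rough sketch of what such propositionalisation amounts to (the relations and aggregate features here are invented for illustration): relational facts about an entity are flattened into a fixed-size numeric vector, and any structure beyond the chosen aggregates is lost.

```python
# Toy propositionalisation: flatten relational facts about an entity into a
# fixed-size feature vector (relations and features are illustrative).

facts = {
    ("parent", "ann", "bob"), ("parent", "ann", "carol"),
    ("friend", "ann", "dave"), ("friend", "dave", "ann"),
}

def propositionalise(entity, facts):
    """Map an entity's relational neighborhood to a fixed-length vector."""
    n_children = sum(1 for (r, a, _) in facts if r == "parent" and a == entity)
    n_friends  = sum(1 for (r, a, _) in facts if r == "friend" and a == entity)
    has_mutual = any((r, b, a) in facts
                     for (r, a, b) in facts if r == "friend" and a == entity)
    return [n_children, n_friends, int(has_mutual)]

print(propositionalise("ann", facts))  # [2, 1, 1] -- anything beyond these
# three aggregates (e.g. *which* child, chains of friends) is discarded.
```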
Critiques came from outside the field as well, primarily from philosophers on intellectual grounds, but also from funding agencies, especially during the two AI winters. A more flexible kind of problem-solving occurs when the system reasons about what to do next, rather than simply choosing one of the available actions. This kind of meta-level reasoning is used in Soar and in the BB1 blackboard architecture.
Symbolic AI
One of the keys to symbolic AI’s success is the way it functions within a rules-based environment. Typical AI models tend to drift from their original intent as new data influences changes in the algorithm, but Scagliarini says the rules of symbolic AI resist drift, so models can be created faster, with far less data to begin with, and then require less retraining once they enter production environments. A multi-agent system, in contrast with a single agent, consists of multiple agents that communicate among themselves with some inter-agent communication language such as Knowledge Query and Manipulation Language (KQML). Advantages of multi-agent systems include the ability to divide work among the agents and increased fault tolerance when agents are lost.
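A minimal sketch of such a rules-based core (the rules themselves are invented for illustration): a forward-chaining loop fires rules whose conditions hold, so the model’s behavior is pinned to an auditable rule base rather than to shifting training data, which is why such models resist drift.

```python
# Minimal forward-chaining rule engine (rules are illustrative).
# Facts are strings; a rule fires when all its conditions are known facts.

rules = [
    ({"is_mammal", "has_hooves"}, "is_ungulate"),
    ({"is_ungulate", "has_long_neck"}, "is_giraffe"),
]

def forward_chain(facts, rules):
    """Fire rules until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"is_mammal", "has_hooves", "has_long_neck"}, rules))
# Derives is_ungulate, then is_giraffe -- every inference step is auditable.
```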
One of their projects involves technology that could be used for self-driving cars. The AI for such cars typically involves a deep neural network that is trained to recognize objects in its environment and take the appropriate action; the deep net is penalized when it does something wrong during training, such as bumping into a pedestrian (in a simulation, of course). “In order to learn not to do bad stuff, it has to do the bad stuff, experience that the stuff was bad, and then figure out, 30 steps before it did the bad thing, how to prevent putting itself in that position,” says MIT-IBM Watson AI Lab team member Nathan Fulton. Consequently, learning to drive safely requires enormous amounts of training data, and the AI cannot be trained out in the real world.
Symbolic artificial intelligence
MIT researchers have developed a new artificial intelligence programming language that can assess the fairness of algorithms more exactly, and more quickly, than available alternatives.

For instance, one prominent idea was to encode the (possibly infinite) interpretation structures of a logic program as (vectors of) real numbers and to represent relational inference as a (black-box) mapping between them, based on the universal approximation theorem. However, this assumes the unbounded relational information to be hidden in the unbounded decimal expansions of the underlying real numbers, which is completely impractical for any gradient-based learning. Moreover, the black-box nature of classic neural models, whose learning abilities are mostly confirmed empirically rather than analytically, makes direct integration with symbolic systems, which could provide the missing capabilities, rather complicated. Driven heavily by its empirical success, deep learning largely moved away from the original biologically inspired models of perceptual intelligence toward a “whatever works in practice” engineering approach. In essence, the concept evolved into a very generic methodology of using gradient descent to optimize the parameters of almost arbitrary nested functions, which has led many to rebrand the field yet again as differentiable programming.
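The differentiable-programming view fits in a few lines of plain Python (a sketch using finite-difference gradients to stay dependency-free, where real systems use automatic differentiation; the model and data are invented): gradient descent tunes the parameters of an arbitrary nested function against a loss.

```python
# Differentiable-programming sketch: gradient descent over the parameters
# of an arbitrary nested function, with numerically estimated gradients.

def model(params, x):
    a, b, c = params
    return a * max(0.0, b * x + c)   # a small nested, non-linear function

def loss(params, data):
    return sum((model(params, x) - y) ** 2 for x, y in data) / len(data)

def grad(f, params, eps=1e-6):
    """Finite-difference estimate of the gradient of f at params."""
    g = []
    for i in range(len(params)):
        bumped = list(params)
        bumped[i] += eps
        g.append((f(bumped) - f(params)) / eps)
    return g

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]      # fit roughly y = 2x
params = [0.5, 0.5, 0.1]
for _ in range(2000):
    g = grad(lambda p: loss(p, data), params)
    params = [p - 0.05 * gi for p, gi in zip(params, g)]
print([round(p, 2) for p in params], round(loss(params, data), 4))
```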
Kahneman describes human thinking as having two components, System 1 and System 2. System 1 is the kind used for pattern recognition while System 2 is far better suited for planning, deduction, and deliberative thinking. In this view, deep learning best models the first kind of thinking while symbolic reasoning best models the second kind and both are needed. The hybrid artificial intelligence learned to play a variant of the game Battleship, in which the player tries to locate hidden “ships” on a game board. In this version, each turn the AI can either reveal one square on the board (which will be either a colored ship or gray water) or ask any question about the board. The hybrid AI learned to ask useful questions, another task that’s very difficult for deep neural networks.
Natural Language Processing
Monotonic basically means one-directional: when one thing goes up, the other goes up as well. What symbolic processing can do is provide formal guarantees that a hypothesis is correct. This could prove important when the revenue of the business is on the line and companies need a way of proving that a model will behave in a way humans can predict. In contrast, a neural network may be right most of the time, but when it’s wrong, it’s not always apparent what factors caused it to generate a bad answer. Hadayat Seddiqi, director of machine learning at InCloudCounsel, a legal technology company, said the time is right for developing a neuro-symbolic learning approach. “Deep learning in its present state cannot learn logical rules, since its strength comes from analyzing correlations in the data,” he said.
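As a toy illustration of such a guarantee (the approval rule and input domains are invented, not a real policy): because a symbolic rule is a transparent function of its inputs, a hypothesis about its behavior can be verified exhaustively over a finite domain, something a black-box network does not offer.

```python
# Toy formal guarantee: exhaustively verify a hypothesis about a symbolic
# decision rule over its entire (finite) input domain.

from itertools import product

def approve(income, debt, has_guarantor):
    """A transparent, rule-based decision (illustrative)."""
    return income - debt >= 10 or (has_guarantor and income >= debt)

# Hypothesis: the rule never approves when debt strictly exceeds income
# and there is no guarantor.
domain = product(range(0, 101), range(0, 101), [False, True])
violations = [(i, d, g) for i, d, g in domain
              if d > i and not g and approve(i, d, g)]

print("hypothesis holds" if not violations else violations[:5])
```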
- “This grammar can generate all the questions people ask and also infinitely many other questions,” says Lake.
- Other ways of handling more open-ended domains included probabilistic reasoning systems and machine learning to learn new concepts and rules.
- First of all, every deep neural net trained by supervised learning combines deep learning and symbolic manipulation, at least in a rudimentary sense.
During the first AI summer, many people thought that machine intelligence could be achieved in just a few years. By the mid-1960s neither useful natural language translation systems nor autonomous tanks had been created, and a dramatic backlash set in. Decades later, however, a new stream of neural architectures based on dynamic computational graphs became popular in modern deep learning to tackle structured data in the (non-propositional) form of various sequences, sets, and trees.
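A minimal sketch of the dynamic-graph idea (the composition function and weight are invented for illustration): the computation is rebuilt to mirror each input tree, so differently shaped inputs yield differently shaped computations under one shared parameter.

```python
# Dynamic computational graph over trees (illustrative): the computation
# recursively mirrors each input's structure, reusing one shared weight.

import math

def embed(node, params):
    """Recursively encode a tree of nested (left, right) pairs or leaf floats."""
    if isinstance(node, tuple):
        left, right = node
        combined = params["w"] * (embed(left, params) + embed(right, params))
        return math.tanh(combined)            # shared composition at every node
    return float(node)                        # leaf: a raw feature

params = {"w": 0.7}
shallow = (1.0, 2.0)
deep = ((1.0, 2.0), (3.0, (4.0, 5.0)))        # a differently shaped input
print(embed(shallow, params), embed(deep, params))
```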
Research problems include reaching consensus among agents, distributed problem solving, multi-agent learning, multi-agent planning, and distributed constraint optimization. Multiple approaches to representing knowledge, and then reasoning with those representations, have been investigated. Below is a quick overview of approaches to knowledge representation and automated reasoning.
Similar to the problems in handling dynamic domains, common-sense reasoning is also difficult to capture in formal reasoning. Examples of common-sense reasoning include implicit reasoning about how people think, as well as general knowledge of day-to-day events, objects, and living creatures. Many have now argued that combining deep learning with the high-level reasoning capabilities of symbolic, logic-based approaches is necessary for progress toward more general AI systems [9,11,12].
AI programming languages
Cognitive architectures such as ACT-R may have additional capabilities, such as the ability to compile frequently used knowledge into higher-level chunks. In Lisp, programs were themselves data structures that other programs could operate on, allowing the easy definition of higher-level languages. Modularity also aids debugging: “You can check which module didn’t work properly and needs to be corrected,” says team member Pushmeet Kohli of Google DeepMind in London. For example, debuggers can inspect the knowledge base or processed question and see what the AI is doing.
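That code-as-data property can be loosely mimicked in Python with the standard ast module (a rough analogy to Lisp’s homoiconicity, not how Lisp itself works): a program is parsed into a data structure that another program inspects and rewrites before executing it.

```python
# Programs as data, Lisp-style, approximated with Python's ast module:
# parse source into a tree, transform the tree, and execute the result.

import ast

source = "result = 2 + 3"
tree = ast.parse(source)                 # the program is now a data structure

class SwapAddToMul(ast.NodeTransformer):
    """Rewrite every addition into a multiplication."""
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            node.op = ast.Mult()
        return node

rewritten = ast.fix_missing_locations(SwapAddToMul().visit(tree))
namespace = {}
exec(compile(rewritten, "<ast>", "exec"), namespace)
print(namespace["result"])               # 6, not 5: the program was rewritten
```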