Experiment with neurosymbolic reasoning: LLM as a parser plus a symbolic reasoner (2024)

Source: Lambda

Your main task is to experiment with semantic parsing using an LLM (GPT from OpenAI or Microsoft, Llama, or another model of your choice) to obtain symbolic rules and facts, which are then fed to a symbolic solver to answer the question; a minimal end-to-end sketch follows. The secondary task is to find examples where asking GPT the question directly gives a wrong answer, while the pipeline above gets it right. This may not be trivial: if you fail, you still pass the lab.
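For concreteness, here is one way the two-stage pipeline could look in Python. This is a sketch, not part of the lab materials: it assumes the OpenAI Python client and the problog pip package are installed, and the model name and system prompt are illustrative placeholders.

```python
# Minimal two-stage pipeline sketch: an LLM parses English into a Problog
# program, then the symbolic solver evaluates it. Assumes
# `pip install openai problog` and OPENAI_API_KEY in the environment;
# the model name and prompt wording are placeholders, not the lab's own.
from openai import OpenAI
from problog.program import PrologString
from problog import get_evaluatable

def neurosymbolic_answer(english_text: str) -> dict:
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model will do
        messages=[
            {"role": "system",
             "content": "Translate the user's text into a valid Problog "
                        "program. Output only Problog clauses, including "
                        "a query(...) line for each question asked."},
            {"role": "user", "content": english_text},
        ],
    )
    program = resp.choices[0].message.content
    # Feed the generated rules and facts to the symbolic solver.
    return get_evaluatable().create_from(PrologString(program)).evaluate()
```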

Note: this lab can be done entirely in a browser; there is no strict need to install anything.

How to proceed:

First, have a look at the gpt subfolder in the nlpsolver repo. The README.md file explains what is there. Use logifyprompt3.txt as an example, or as a source of ideas, for how to make an LLM parse English to logic.
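If you want a starting point before reading the repo, a few-shot parsing prompt in the same spirit can be quite short. The prompt below is an illustrative guess at the format, not a copy of logifyprompt3.txt:

```python
# A hypothetical few-shot parsing prompt; the worked example teaches the
# LLM the output format. This is NOT the repo's logifyprompt3.txt, just
# a minimal prompt in the same spirit.
LOGIFY_PROMPT = """Convert the passage into Problog facts and rules.
Use lowercase constants, one clause per line, each ending with a period.
Emit a query(...) line for every question in the passage.

Passage: All birds can fly. Tweety is a bird. Can Tweety fly?
Output:
flies(X) :- bird(X).
bird(tweety).
query(flies(tweety)).

Passage: {passage}
Output:
"""
```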

Second, modify the example prompt (or write your own) to produce input for either logictools or the online problog. If you choose problog (which I recommend for this lab, since it is more interesting and challenging), write the parsing prompt so that it can, at least for some cases, include (estimated) numeric confidences in the problog rules / facts it generates; an illustrative example follows.
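Problog attaches a probability to a clause with the `p::` prefix, so the parser prompt can ask the LLM to map hedged English ("probably", "usually", "rarely") to numbers. The figures below are illustrative guesses, not calibrated values:

```python
# Confidence-annotated Problog a parser might produce from:
#   "It will probably rain tomorrow. Rain usually cancels the match.
#    Will the match be cancelled?"
# The 0.7 and 0.9 are guessed mappings for "probably" and "usually".
WEATHER_PROGRAM = """
0.7::rain_tomorrow.
0.9::match_cancelled :- rain_tomorrow.
query(match_cancelled).
"""
```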

You do not need to use the gpt.py program in the repo: it is perfectly OK to use a browser interface to an LLM, or some LLM other than GPT.

When you get sensible output from the LLM parsing step, use logictools or problog to find the answer; a local evaluation snippet is sketched below. Once you can make that work, please create several example rule / fact / question sets.
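Locally, the problog pip package can evaluate a generated program in a few lines; the online editor at https://dtai.cs.kuleuven.be/problog/ accepts the same text pasted into the browser.

```python
# Evaluate a generated program locally; assumes `pip install problog`.
from problog.program import PrologString
from problog import get_evaluatable

program = """
0.7::rain_tomorrow.
0.9::match_cancelled :- rain_tomorrow.
query(match_cancelled).
"""
result = get_evaluatable().create_from(PrologString(program)).evaluate()
for term, prob in result.items():
    print(term, prob)  # -> match_cancelled 0.63  (= 0.7 * 0.9)
```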

Then try to find examples where asking GPT the question directly gives a wrong answer, but the neurosymbolic approach gives the right one. Do not worry too much if you fail.
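One cheap family of candidates to test is exact probability arithmetic over independent causes (a noisy-or), where the correct answer requires an actual computation. There is no guarantee any particular model gets this wrong, but it takes seconds to check:

```python
# Candidate test case: two independent causes of an alarm. The exact
# answer is 1 - (1 - 0.3)(1 - 0.4) = 0.58; Problog computes it
# mechanically, while a direct LLM answer must do the arithmetic itself.
from problog.program import PrologString
from problog import get_evaluatable

alarm_program = """
0.3::burglary.
0.4::earthquake.
alarm :- burglary.
alarm :- earthquake.
query(alarm).
"""
print(get_evaluatable().create_from(PrologString(alarm_program)).evaluate())
# -> {alarm: 0.58}
```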

Finally, prepare a small presentation of what you did, what worked, and what did not.