
Are LLMs capable of non-verbal reasoning?

    Large language models have thus far achieved great success by using their transformer architecture to effectively predict the next words (i.e. language tokens) needed to respond to queries. However, when it comes to complex reasoning tasks that require abstract logic, some researchers have found that interpreting everything through this kind of 'language space' can cause problems, even for modern 'reasoning models'.

    Now researchers are trying to get around these problems by creating models that can work out potential logical solutions entirely in 'latent space' – the hidden computational layer that exists just before the transformer generates human-readable language. While this approach doesn't transform an LLM's overall reasoning ability, it does show marked improvements in accuracy on certain types of logic problems and points to some interesting directions for new research.
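    As a rough, toy-scale sketch of that difference (this is an illustrative Python example, not the researchers' code; every name in it, from transformer_step to the tiny embedding table, is a stand-in), compare a loop that collapses each intermediate step into a single word token with one that simply feeds the hidden state back in:

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, DIM = 50, 16

embed_table = rng.normal(size=(VOCAB, DIM))          # toy token embeddings
W_step = rng.normal(size=(DIM, DIM)) / np.sqrt(DIM)  # stand-in for one transformer pass
W_out = embed_table.T                                # tied unembedding matrix

def transformer_step(h):
    """Stand-in for a forward pass that turns one hidden state into the next."""
    return np.tanh(h @ W_step)

def reason_in_language_space(h, steps=5):
    """Token-by-token reasoning: each intermediate thought is collapsed
    into a single word token, then re-embedded before the next step."""
    tokens = []
    for _ in range(steps):
        h = transformer_step(h)
        token = int(np.argmax(h @ W_out))  # pick the most likely word
        tokens.append(token)
        h = embed_table[token]             # only that one word carries forward
    return tokens

def reason_in_latent_space(h, steps=5):
    """Latent-space reasoning: the full hidden state is fed straight back,
    with no detour through a discrete word token."""
    for _ in range(steps):
        h = transformer_step(h)
    return h

h0 = embed_table[rng.integers(VOCAB)]
print("tokens chosen:", reason_in_language_space(h0))
print("final latent state (first 4 dims):", reason_in_latent_space(h0)[:4])
```

    The point of the second loop is that nothing is lost to the word-picking bottleneck: the model's full internal state carries into the next reasoning step instead of being squeezed into a single token.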

    Wait, what space?

    Modern reasoning models such as ChatGPT's o1 tend to work by generating a 'chain of thought'. Each step of the logical process in these models is expressed as a sequence of natural language word tokens that are fed back through the model.
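    In code, that loop looks roughly like the hedged sketch below (the complete function is a hypothetical stand-in for a single model call, with canned responses so the example runs on its own): each generated step is appended to the prompt and fed back through the model as ordinary text.

```python
_canned_steps = iter([
    "Step 1: The bus starts with 27 people and 9 get off, leaving 18.",
    "Step 2: Then 6 people get on, so 18 + 6 = 24.",
    "Answer: 24",
])

def complete(prompt: str) -> str:
    """Hypothetical stand-in for one LLM call; a real system would send
    `prompt` to a model and return the next chunk of generated text."""
    return next(_canned_steps)

def chain_of_thought(question: str, max_steps: int = 5) -> str:
    """Each reasoning step is produced as natural-language tokens and then
    fed back through the model as part of the growing prompt."""
    prompt = question + "\nLet's think step by step.\n"
    for _ in range(max_steps):
        step = complete(prompt)
        prompt += step + "\n"        # the step's words become the next input
        if step.startswith("Answer:"):
            return step
    return "No answer reached."

print(chain_of_thought("A bus has 27 people; 9 get off and 6 get on. How many remain?"))
```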
