Fine-Tuning Large Language Models as Turing Machines for Arithmetic Execution

This episode covers a paper that fine-tunes Large Language Models (LLMs) to function as Turing machines, enabling them to execute arithmetic tasks step by step rather than relying on memorized examples.

AI Papers Decoded Podcast
11 views • Oct 12, 2024

About this video

https://arxiv.org/pdf/2410.07896

Large Language Models (LLMs) have demonstrated remarkable capabilities across a wide range of natural language processing and reasoning tasks. However, their performance in the foundational domain of arithmetic remains unsatisfactory. When dealing with arithmetic tasks, LLMs often memorize specific examples rather than learning the underlying computational logic, limiting their ability to generalize to new problems. In this paper, we propose a Composable Arithmetic Execution Framework (CAEF) that enables LLMs to learn to execute step-by-step computations by emulating Turing Machines, thereby gaining a genuine understanding of computational logic. Moreover, the proposed framework is highly scalable, allowing composing learned operators to significantly reduce the difficulty of learning complex operators. In our evaluation, CAEF achieves nearly 100% accuracy across seven common mathematical operations on the LLaMA 3.1-8B model, effectively supporting computations involving operands with up to 100 digits, a level where GPT-4o falls short noticeably in some settings.
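To make the core idea concrete, here is a minimal, hypothetical sketch of step-by-step execution: addition is decomposed into explicit machine states (digit position and carry), so each step is a simple transition rather than a single memorized input-to-output mapping. This is an illustration of the Turing-machine-style execution concept only, not the paper's actual CAEF implementation, and the function names are invented for this example.

```python
def add_step(a_digits, b_digits, pos, carry):
    """One transition: consume the digits at `pos`, emit an output digit
    and the next carry. Missing digits are treated as 0."""
    da = a_digits[pos] if pos < len(a_digits) else 0
    db = b_digits[pos] if pos < len(b_digits) else 0
    total = da + db + carry
    return total % 10, total // 10

def add_by_steps(a: int, b: int) -> int:
    """Add two non-negative integers by iterating explicit transitions,
    mimicking a Turing machine's head moving along a tape."""
    # Tapes hold the least-significant digit first, mirroring head movement.
    a_digits = [int(d) for d in reversed(str(a))]
    b_digits = [int(d) for d in reversed(str(b))]
    out, carry = [], 0
    for pos in range(max(len(a_digits), len(b_digits))):
        digit, carry = add_step(a_digits, b_digits, pos, carry)
        out.append(digit)
    if carry:
        out.append(carry)
    return int("".join(map(str, reversed(out))))
```

Because every step depends only on local state, this scheme scales to very long operands, which is the intuition behind CAEF's reported support for 100-digit computations.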

Video Information

Views: 11
Likes: 2
Duration: 9:09
Published: Oct 12, 2024
