
Focusing On Fine-Tuning

Ohm, Paul

Those who design and deploy generative AI models, such as large language models like GPT-4 or image diffusion models like Stable Diffusion, can shape model behavior at four distinct stages: pretraining, fine-tuning, in-context learning, and input and output filtering. The four stages differ along many dimensions, including cost, access, and persistence of change. Pretraining is always very expensive, while in-context learning is nearly costless. Pretraining and fine-tuning change the model persistently, while in-context learning and filters make less durable alterations. These are but two of many such distinctions reviewed in this Essay.
Legal scholars, policymakers, and judges need to understand the differences between the four stages as they try to shape and direct what these models do. Although legal and policy interventions can (and probably will) occur during all four stages, many will be best directed at the fine-tuning stage. Of the four approaches, fine-tuning will often strike the best balance among power, precision, and disruption.
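The four stages the abstract describes can be sketched schematically. The Python below is purely illustrative pseudocode under assumptions of this editor's choosing (all function names and data structures are hypothetical, not any real training API); it shows only where each intervention acts and which interventions persist in the model's weights:

```python
# Illustrative sketch of the four stages at which model behavior can be shaped.
# All names are hypothetical; a real model is not a dict of strings.

def pretrain(corpus):
    # Stage 1: pretraining — very expensive, persistent; builds the base model.
    return {"weights": f"base model trained on {len(corpus)} documents"}

def fine_tune(model, examples):
    # Stage 2: fine-tuning — cheaper than pretraining, but still a persistent
    # change: it produces a new set of weights.
    tuned = dict(model)
    tuned["weights"] += f", fine-tuned on {len(examples)} examples"
    return tuned

def generate(model, prompt, context=""):
    # Stage 3: in-context learning — instructions supplied in the prompt steer
    # behavior for this request only; the weights are untouched.
    return f"[{model['weights']}] answering {prompt!r} with context {context!r}"

def filtered_generate(model, prompt, context="", blocklist=()):
    # Stage 4: input/output filtering — a wrapper screens requests (and could
    # screen responses) without modifying the model at all.
    if any(term in prompt for term in blocklist):
        return "request refused"
    return generate(model, prompt, context)

base = pretrain(["doc"] * 1000)
tuned = fine_tune(base, ["example"] * 50)
# Fine-tuning altered the weights; in-context instructions and filters did not.
print(filtered_generate(tuned, "summarize this", context="be concise"))
print(filtered_generate(tuned, "forbidden request", blocklist=("forbidden",)))
```

In this toy framing, the persistence distinction the abstract draws is visible directly: `pretrain` and `fine_tune` return changed weights, while `generate` and `filtered_generate` only wrap a fixed model per request.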

Files

  • 10.52214/stlr.v25i2.12762 (PDF, 290 KB)


Published Here
May 23, 2025