
"While OpenAI and Anthropic continue begging for more and more investor cash in the face of consistently lackluster earnings, some vendors delivering advanced AI to the legal industry dropped hints about growing interest in small models. It's not that large language models don't work - though they often don't - but they're overbloated science experiments that, as Goldman Sachs observed, require exponentially increased resources to achieve tiny linear gains."
"Some could be light enough to run on institutional hardware, meaning law firms and corporate clients can keep their data in-house instead of shipping it off to Silicon Valley narcs. For an industry that still treats the cloud like it's a Soviet spy balloon - an overreaction, but a persistent one - the pitch for small models is obvious: more control, less spend, nearly the same output."
At ILTACON, legal technology professionals signaled rising interest in smaller AI models as alternatives to large, resource-hungry language models. Investors continue funding major LLM vendors despite weak earnings, while some legal-focused vendors promote compact models that deliver comparable output at lower cost and with lighter resource demands. Smaller models can run on institutional hardware, allowing law firms and corporate clients to keep data in-house, and the industry's persistent cloud skepticism makes local hosting attractive. Meta announced a small reasoning model intended for local hosting and specialized uses such as math and coding, and a live demo failure for a general-purpose AI offering underscored the practical risks of the large-scale approach.
Read at Above the Law