
‘No one knows what makes humans so much more efficient’: small language models based on Homo sapiens could help explain how we learn and improve AI efficiency — for better or for worse


Tech companies are shifting focus from building the largest language models (LLMs) to developing small language models (SLMs) that can match or even outperform them.

Meta’s Llama 3 (400 billion parameters), OpenAI’s GPT-3.5 (175 billion parameters), and GPT-4 (an estimated 1.8 trillion parameters) are famously large models, while Microsoft’s Phi-3 family ranges from 3.8 billion to 14 billion parameters, and Apple Intelligence’s on-device model has “only” around 3 billion parameters.
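
To put those parameter counts in perspective, here is a minimal back-of-the-envelope sketch (not from the article) of the memory needed just to hold each model’s weights, assuming 2 bytes per parameter (fp16/bf16) and ignoring activations, the KV cache, and other runtime overhead. The GPT-4 figure is the unconfirmed estimate quoted above.

    # Rough weight-memory estimate per model, assuming fp16/bf16 storage.
    # Parameter counts are those quoted in the article; GPT-4's is an
    # unconfirmed estimate.
    BYTES_PER_PARAM = 2  # 2 bytes per parameter at fp16/bf16 precision

    models = {
        "GPT-4 (estimated)": 1.8e12,
        "Llama 3 (largest)": 400e9,
        "GPT-3.5": 175e9,
        "Phi-3 (largest)": 14e9,
        "Phi-3 (smallest)": 3.8e9,
        "Apple Intelligence (on-device)": 3e9,
    }

    for name, params in models.items():
        gigabytes = params * BYTES_PER_PARAM / 1e9
        print(f"{name:32s} ~{gigabytes:,.0f} GB of weights")

Even this crude arithmetic shows the gap: a model with an estimated 1.8 trillion parameters needs on the order of thousands of gigabytes for its weights alone, while a 3-billion-parameter model fits in roughly 6 GB, which is why only the smallest models can plausibly run on a phone or laptop.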



Source link: TechRadar
