{"provider_url":"https://hatena.blog","author_url":"https://blog.hatena.ne.jp/BioErrorLog/","title":"Understanding 1-bit LLMs | Paper Notes: The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits","provider_name":"Hatena Blog","image_url":"https://cdn-ak.f.st-hatena.com/images/fotolife/B/BioErrorLog/20240412/20240412155607.png","html":"<iframe src=\"https://hatenablog-parts.com/embed?url=https%3A%2F%2Fen.bioerrorlog.work%2Fentry%2F1-58bit-llm-paper\" title=\"Understanding 1-bit LLMs | Paper Notes: The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits - BioErrorLog Tech Blog\" class=\"embed-card embed-blogcard\" scrolling=\"no\" frameborder=\"0\" style=\"display: block; width: 100%; height: 190px; max-width: 500px; margin: 10px 0px;\"></iframe>","categories":["AI","LLM","Papers"],"height":"190","version":"1.0","published":"2025-12-19 07:41:03","url":"https://en.bioerrorlog.work/entry/1-58bit-llm-paper","width":"100%","type":"rich","description":"This is a summary of the paper \"The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits\". Introduction The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits Overview Method Results Conclusion/Thoughts References Introduction The paper covered in this summary: arxiv.org Publishe\u2026","blog_url":"https://en.bioerrorlog.work/","author_name":"BioErrorLog","blog_title":"BioErrorLog Tech Blog"}