GPT on Linux PC with 8GB RAM, No GPU, No Container, and No Python - Technically Impossible (espio999, published 2023-03-21): https://impsbl.hatenablog.jp/entry/GPTOnLinuxPCwith8GB-RAM_en