Self-Regulating Artificial General Intelligence
himaginary's diary (Hatena Blog), 2018-03-01 — category: Economics (経済)
https://himaginary.hatenablog.com/entry/20180301/Self_Regulating_Artificial_General_Intelligence

Joshua Gans of the University of Toronto has posted an NBER paper with that title (ungated version). The original title is "Self-Regulating Artificial General Intelligence," and Gans described its contents in a Digitopoly entry on November 15 of last year. Below is the paper's abstract:

This paper examines the paperclip apocalypse concern for artificial general intelligence. This arises when a superintelligent AI with a simple goal (i.e.,…