{"blog_title":"c4se\u8a18\uff1a\u3055\u3063\u3061\u3083\u3093\u3067\u3059\u3088\u2606","provider_url":"https://hatena.blog","version":"1.0","type":"rich","html":"<iframe src=\"https://hatenablog-parts.com/embed?url=https%3A%2F%2Fc4se.hatenablog.com%2Fentry%2F2013%2F01%2F24%2F175114\" title=\"#golang \u00a769 Exercise: Web Crawler - c4se\u8a18\uff1a\u3055\u3063\u3061\u3083\u3093\u3067\u3059\u3088\u2606\" class=\"embed-card embed-blogcard\" scrolling=\"no\" frameborder=\"0\" style=\"display: block; width: 100%; height: 190px; max-width: 500px; margin: 10px 0px;\"></iframe>","blog_url":"https://c4se.hatenablog.com/","categories":["Programming","Golang"],"provider_name":"Hatena Blog","width":"100%","height":"190","url":"https://c4se.hatenablog.com/entry/2013/01/24/175114","description":"A Tour of Go 69 Exercise: Web Crawler In this exercise you'll use Go's concurrency features to parallelize a web crawler. Modify the Crawl function to fetch URLs in parallel without fetching the same URL twice. Example package main import ( \"fmt\" ) type Fetcher interface { // Fetch returns the body \u2026","author_url":"https://blog.hatena.ne.jp/Kureduki_Maari/","title":"#golang \u00a769 Exercise: Web Crawler","author_name":"Kureduki_Maari","published":"2013-01-24 17:51:14","image_url":null}
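The description above summarizes the Tour of Go "Exercise: Web Crawler": modify `Crawl` to fetch URLs in parallel without fetching the same URL twice. The post's own code is truncated here, so the sketch below is illustrative rather than the author's solution: it guards a `seen` set with a mutex for deduplication and uses a `sync.WaitGroup` to wait for all goroutines, with a hypothetical in-memory `fakeFetcher` standing in for real HTTP fetching (the `visited` type and `TryAdd` method are names introduced for this example).

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

type Fetcher interface {
	// Fetch returns the body of URL and a slice of URLs found on that page.
	Fetch(url string) (body string, urls []string, err error)
}

// visited guards the set of URLs already claimed, so no URL is fetched twice.
type visited struct {
	mu   sync.Mutex
	seen map[string]bool
}

// TryAdd records url and reports whether it was new.
func (v *visited) TryAdd(url string) bool {
	v.mu.Lock()
	defer v.mu.Unlock()
	if v.seen[url] {
		return false
	}
	v.seen[url] = true
	return true
}

// Crawl fetches pages reachable from url up to depth, in parallel,
// skipping any URL that another goroutine has already claimed.
func Crawl(url string, depth int, fetcher Fetcher, v *visited, wg *sync.WaitGroup) {
	defer wg.Done()
	if depth <= 0 || !v.TryAdd(url) {
		return
	}
	body, urls, err := fetcher.Fetch(url)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("found: %s %q\n", url, body)
	for _, u := range urls {
		wg.Add(1)
		go Crawl(u, depth-1, fetcher, v, wg)
	}
}

// fakeFetcher is an in-memory Fetcher used only for this demonstration.
type fakeFetcher map[string]*fakeResult

type fakeResult struct {
	body string
	urls []string
}

func (f fakeFetcher) Fetch(url string) (string, []string, error) {
	if res, ok := f[url]; ok {
		return res.body, res.urls, nil
	}
	return "", nil, fmt.Errorf("not found: %s", url)
}

var fetcher = fakeFetcher{
	"https://golang.org/": &fakeResult{
		"The Go Programming Language",
		[]string{"https://golang.org/pkg/", "https://golang.org/cmd/"},
	},
	"https://golang.org/pkg/": &fakeResult{
		"Packages",
		[]string{"https://golang.org/", "https://golang.org/pkg/fmt/"},
	},
	"https://golang.org/pkg/fmt/": &fakeResult{
		"Package fmt",
		[]string{"https://golang.org/"},
	},
}

func main() {
	v := &visited{seen: make(map[string]bool)}
	var wg sync.WaitGroup
	wg.Add(1)
	go Crawl("https://golang.org/", 4, fetcher, v, &wg)
	wg.Wait()

	// Print the deduplicated set of claimed URLs in a stable order.
	var urls []string
	for u := range v.seen {
		urls = append(urls, u)
	}
	sort.Strings(urls)
	for _, u := range urls {
		fmt.Println("crawled:", u)
	}
}
```

The "found" lines may print in any order since the goroutines race, but each URL is fetched at most once because `TryAdd` atomically claims a URL before any fetch begins; a channel-based design is an equally common alternative for this exercise.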