Though many believe in Grub's novel distributed computing system, the search engine has its share of critics. Many argue that a large cache is not what makes a good search engine; rather, it is the ability to deliver accurate, relevant results to users. Loyal users of Google say they value that engine for its targeted results and would not switch to Grub unless its search technology were superior to Google's.

Quite a few webmasters oppose Grub for its apparent disregard of sites' robots.txt files, which can instruct crawlers to stay out of certain areas of a site. Grub's developers claim that Grub caches robots.txt itself, so recent changes to the file may not be detected immediately; webmasters counter that Grub fails to honor even long-standing robots.txt files that block access to all crawlers, where staleness cannot be the explanation.

According to Wikipedia's own webmasters, the /w/ directory, which stores the scripts for page editing and other functions and is blocked to robots by robots.txt, is cached by Grub but by no other search engine. Wikipedia's webmasters also complain that Grub's distributed architecture overloads servers by keeping open a large number of TCP connections, an effect essentially the same as that of a typical distributed denial-of-service attack.
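The robots.txt mechanism at issue works as follows: a crawler fetches /robots.txt from a host, caches the parsed rules, and consults them before requesting any page; if the cached copy is never refreshed, later changes go unnoticed. The Python sketch below illustrates this with the standard library's urllib.robotparser. The RobotsCache class and the one-day refresh interval are illustrative assumptions, not a description of Grub's actual code.

```python
import time
from urllib.parse import urlsplit
from urllib.robotparser import RobotFileParser

ROBOTS_TTL = 24 * 60 * 60  # assumed refresh interval: re-fetch robots.txt daily


class RobotsCache:
    """Caches parsed robots.txt rules per host (illustrative only)."""

    def __init__(self):
        self._entries = {}  # host -> (RobotFileParser, fetch_time)

    def can_fetch(self, user_agent: str, url: str) -> bool:
        host = urlsplit(url).netloc
        entry = self._entries.get(host)
        # Re-fetch once the cached copy expires. A crawler that never
        # re-fetches would miss any changes a webmaster makes; this is
        # the staleness problem Grub's developers cite above.
        if entry is None or time.time() - entry[1] > ROBOTS_TTL:
            parser = RobotFileParser()
            parser.set_url(f"http://{host}/robots.txt")
            parser.read()  # downloads and parses the file
            entry = (parser, time.time())
            self._entries[host] = entry
        return entry[0].can_fetch(user_agent, url)
```

A crawler would call can_fetch before every request and skip any URL for which it returns False; a directory such as Wikipedia's /w/ would then stay uncrawled for as long as robots.txt disallows it.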
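On the overload complaint: a single polite crawler typically caps how many simultaneous connections it opens to any one host, for example with a per-host semaphore as in the hypothetical sketch below (the limit of 2 and all names are assumptions, not Grub's behavior). The critics' point is that this discipline is harder to enforce across Grub's many independent clients, which can collectively open far more connections to one server than any single client intends.

```python
import asyncio
from collections import defaultdict
from urllib.parse import urlsplit

MAX_CONNECTIONS_PER_HOST = 2  # assumed politeness limit, per client

# One semaphore per host caps this client's concurrent connections.
_host_limits = defaultdict(lambda: asyncio.Semaphore(MAX_CONNECTIONS_PER_HOST))


async def polite_fetch(url: str) -> bytes:
    """Fetch a URL while holding a per-host connection slot."""
    parts = urlsplit(url)
    async with _host_limits[parts.netloc]:
        reader, writer = await asyncio.open_connection(parts.hostname, 80)
        request = f"GET {parts.path or '/'} HTTP/1.0\r\nHost: {parts.netloc}\r\n\r\n"
        writer.write(request.encode())
        await writer.drain()
        response = await reader.read()  # HTTP/1.0: server closes when done
        writer.close()
        await writer.wait_closed()
        return response
```

Note that such a limit constrains only one client; thousands of clients each obeying it can still overwhelm a server in aggregate, which is the effect Wikipedia's webmasters compare to a distributed denial-of-service attack.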