What Are Search Engine Spiders?

A spider, also known as a robot or a crawler, is simply a program that follows, or "crawls," links across the Internet, retrieving content from sites and adding it to a search engine's index.
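The follow-links-and-index loop described above can be sketched in a few lines. This is a simplified simulation, not a real crawler: the site names and page text below are invented for illustration, and the "web" is an in-memory dictionary rather than pages fetched over HTTP.

```python
from collections import deque

# Hypothetical in-memory "web": URL -> (page text, outgoing links).
# A real spider would fetch these pages over HTTP instead.
WEB = {
    "site-a.com": ("welcome to site a", ["site-b.com"]),
    "site-b.com": ("articles about spiders", ["site-a.com", "site-c.com"]),
    "site-c.com": ("more spider content", []),
}

def crawl(seed):
    """Breadth-first crawl from `seed`, building a word -> pages index."""
    index = {}            # word -> set of pages containing it
    visited = set()
    queue = deque([seed])
    while queue:
        url = queue.popleft()
        if url in visited or url not in WEB:
            continue
        visited.add(url)
        text, links = WEB[url]
        for word in text.split():   # grab the page's content and index it
            index.setdefault(word, set()).add(url)
        queue.extend(links)         # follow links to discover new pages
    return index

index = crawl("site-a.com")
print(sorted(index["spiders"]))  # pages mentioning "spiders"
```

Note that the crawl only ever reaches pages that some already-visited page links to, which is exactly why inbound links matter: a page with no links pointing at it is invisible to this loop.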

Spiders can only follow links from one page to another and from one site to another. That is the primary reason why links to your site (inbound links) are so important. Links to your website from other websites give search engine spiders more "food" to chew on: the more often they find links to your site, the more often they will stop by and visit. Google in particular relies on its spiders to build its vast index of listings.

Spiders find Web pages by following links from other Web pages, but you can also submit your Web pages directly to a search engine or directory and request a visit by their spider. In fact, it's a good idea to manually submit your site to a human-edited directory such as Yahoo; spiders from other search engines (such as Google) will usually find it there and add it to their databases. Submitting your URL straight to the various search engines can be useful as well, but spider-based engines will usually pick up your site whether or not you've submitted it.

©2014 About.com. All rights reserved.