Dealing with Search Engine Spiders
Search engines gather information about websites through various means, an important one being the use of automated programs called spiders.
Spiders, as their name suggests, "crawl the web" by reading web pages and following links, building a database of content and other relevant information about particular websites. This database, better known as an index, is queried with the keywords and phrases that search engine visitors enter, and returns suggestions of relevant pages for them to visit.
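The crawl-and-index process can be sketched in a few lines of JavaScript. This is a toy model, not a real spider: the "web" here is an in-memory map of paths to markup, links are extracted with a simple regular expression, and an actual crawler would fetch pages over HTTP and use a proper HTML parser.

```javascript
// Toy model of a spider: breadth-first crawl from a start page,
// following href links and recording each page's text in an index.
const pages = {
  "/": '<a href="/about">About</a> Welcome to the site',
  "/about": '<a href="/">Home</a> We sell widgets',
};

function crawl(startUrl) {
  const index = {};            // url -> extracted page text
  const queue = [startUrl];
  const seen = new Set(queue);
  while (queue.length > 0) {
    const url = queue.shift();
    const html = pages[url];
    if (html === undefined) continue;
    // Record the page text with markup stripped.
    index[url] = html.replace(/<[^>]*>/g, "").trim();
    // Follow every link found in the page, once each.
    for (const match of html.matchAll(/href="([^"]+)"/g)) {
      const link = match[1];
      if (!seen.has(link)) {
        seen.add(link);
        queue.push(link);
      }
    }
  }
  return index;
}

const index = crawl("/");
console.log(index);
```

Note that the spider only ever sees what is in the markup: any content that is not present in a page, or not reachable through a link, never makes it into the index.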
This can create a problem for highly dynamic sites, which rely on user interaction (rather than passive surfing) to invoke the loading of new content delivered on-demand by the server. The visiting spider may not have access to the content that would be loaded by dynamic means and therefore never gets to index it.
The problem can be exacerbated by the use of Ajax, with its tendency to deliver even more content in still fewer pages.
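To see why, consider a page whose content arrives entirely by Ajax. In a sketch like the following (the `/latest.html` fragment URL is purely illustrative), the static markup a spider downloads contains only an empty `div` — the text that visitors actually read is fetched by script after the page loads, so it never appears in the index.

```html
<div id="content"></div>
<script>
  // Content is requested only after the page has loaded in a
  // browser; a spider reading the raw HTML never runs this.
  var xhr = new XMLHttpRequest();
  xhr.open("GET", "/latest.html");
  xhr.onload = function() {
    document.getElementById("content").innerHTML = xhr.responseText;
  };
  xhr.send();
</script>
```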
It would seem wise to ensure that spiders can index a static version of all relevant content somewhere on the site. Because spiders follow links embedded in pages, the provision of a hypertext linked site map can be a useful addition in this regard.
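Such a site map can be as simple as a static page of ordinary hypertext links, one per area of the site (the paths below are illustrative). Because every destination is a plain link in the markup, a spider can follow and index each one without executing any script.

```html
<ul>
  <li><a href="/products.html">Products</a></li>
  <li><a href="/support.html">Support</a></li>
  <li><a href="/contact.html">Contact us</a></li>
</ul>
```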