Web crawler
A web crawler (also called a spider) is a computer program that automatically fetches the contents of web pages, usually following the hyperlinks on each page to find more pages. The program then analyses the content, for example to index it by certain search terms. Search engines commonly use web crawlers.[1]
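The basic loop is: take a URL from a queue, fetch the page, process the content, and add the page's links to the queue. The sketch below shows this in Python using only the standard library; the breadth-first order, the max_pages limit, and the names crawl and LinkParser are illustrative choices, not how any particular search engine works.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkParser(HTMLParser):
    """Collects the href targets of <a> tags on a page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(start_url, max_pages=10):
    """Breadth-first crawl: fetch a page, extract its links, queue them."""
    seen = set()
    queue = deque([start_url])
    while queue and len(seen) < max_pages:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        try:
            with urlopen(url, timeout=10) as response:
                html = response.read().decode("utf-8", errors="replace")
        except OSError:
            continue  # skip pages that fail to load
        parser = LinkParser()
        parser.feed(html)
        # A real crawler would index the page text here, e.g. by search terms.
        for link in parser.links:
            queue.append(urljoin(url, link))  # resolve relative links
    return seen
```

A production crawler would also respect robots.txt, rate-limit its requests, and store what it finds; this sketch only shows the fetch-parse-follow cycle.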
Related pages
- HTTrack – a web crawler released in 1998
References
- Masanès, Julien (February 15, 2007). Web Archiving. Springer. p. 1. ISBN 978-3-540-46332-0. Retrieved April 24, 2014.