A quote from the book helps visualize the true figures behind a Web site's visitor counts:
"One could legitimately wonder if the counted visits appearing in Table 7-6 can be attributed to actual people or to robots, spiders, crawlers and the likes. Robots are usually small applications designed to gather data for search engine indexes. Unfortunately, the record of a virtual visit by a robot, spider or crawler looks no different in a Web server transaction log than that of a visit by an actual person. Robot visits may be traceable through name identifiers (i.e., Googlebot), or by a high volume of pages accessed in a very small timeframe, or by their requests for a file on the Web site called 'robots.txt.' The latter normally describes what may or may not be indexed from a given Web site by search engines. Nevertheless, if found, visits by robots must be systematically filtered and separately counted or just ignored when assessing the real usage impact of a Web site."
Continue reading this and further details in the book: eMetrics or Web-Metrics or Webometrics - A new book for Library & Information Professionals.
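The three heuristics the quoted passage describes — name identifiers such as "Googlebot" in the user-agent, an abnormally high volume of pages in a small timeframe, and requests for robots.txt — can be sketched as a simple log filter. This is a minimal illustration, not the book's method: the log record format, field names, and the rate threshold below are all assumptions.

```python
from collections import Counter

# Assumed name fragments and rate cutoff; real filters would use fuller lists.
BOT_NAME_HINTS = ("bot", "crawler", "spider")
MAX_PAGES_PER_MINUTE = 30  # assumed cutoff for "high volume in a small timeframe"

def is_robot_visit(records):
    """records: list of (user_agent, path, minute_bucket) tuples for one visitor."""
    for user_agent, path, _ in records:
        # 1. Name identifiers (e.g. "Googlebot") in the user-agent string.
        if any(hint in user_agent.lower() for hint in BOT_NAME_HINTS):
            return True
        # 2. A request for robots.txt, which only crawlers normally fetch.
        if path == "/robots.txt":
            return True
    # 3. An abnormally high number of pages accessed in a small timeframe.
    per_minute = Counter(minute for _, _, minute in records)
    return max(per_minute.values()) > MAX_PAGES_PER_MINUTE

human = [("Mozilla/5.0", "/index.html", 0), ("Mozilla/5.0", "/about.html", 2)]
crawler = [("Googlebot/2.1", "/robots.txt", 0), ("Googlebot/2.1", "/index.html", 0)]
print(is_robot_visit(human))    # False
print(is_robot_visit(crawler))  # True
```

As the quote notes, visits flagged this way would then be separately counted or excluded before assessing a site's real usage.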
Saturday, October 18, 2008
2 comments:
Tag-based solutions like VisiStat do not count spiders and bots, so they give a good representation of what humans are doing on websites. www.visistat.com
Anony (I am sure you are not a robot or spider... kidding). Thanks for the insight. I would be happy if you sent me an email; that would give us an opportunity to enhance our understanding of fact and fiction.