Understanding your Web
The 2 cents I wanted to put in has to do with the difference between server-side stats
(AWStats, Webalizer, and Analog) and script-based stats (Google Analytics). Web server software like IIS
and Apache keeps a log of the IP addresses that request information (HTML files, PHP files, image
files .. whatever) and stores that information in log files. Server-side stats packages read those log
files and crunch them into graphs and numbers that are more easily understood by us humans. The
packages differ from one another because, as was explained above, each one sets its own standard for the
crunching: someone arbitrarily decided "this counts as a visit and that doesn't," and those decisions don't always agree.
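To make the "crunching" concrete, here is a minimal sketch of the first step such a package performs: parsing one line of a server log. This assumes Apache's common "combined" log format; your server may be configured differently, and real packages like AWStats handle many formats.

```python
import re

# Regex for Apache's "combined" log format: IP, identd, user, timestamp,
# request line, status, bytes, referrer, and user agent.
COMBINED = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<bytes>\S+) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

# A made-up example line in that format:
line = ('203.0.113.7 - - [10/Oct/2008:13:55:36 -0700] '
        '"GET /index.html HTTP/1.1" 200 2326 '
        '"http://example.com/start.html" "Mozilla/5.0"')

m = COMBINED.match(line)
if m:
    # A stats package would tally these fields across millions of lines.
    print(m.group('ip'), m.group('request'), m.group('status'))
```

Everything a server-side package knows comes from fields like these; how it rolls them up into "visits" is where the arbitrary decisions come in.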
On the other hand are script-based statistics, like Google Analytics, which put a special piece of
JavaScript on each page. The script only runs when a browser actually loads the page, so if it never
executes, the stats service doesn't gather any data. But there is a large advantage to using script-based
stats: the data in Google Analytics is arguably more relevant, because you know that real eyeballs loaded
the page and executed the script before it got counted as a "hit" or "visit". Server-side stats show all
the requests, including search engine spiders and the like.
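A toy model of that difference (hypothetical data, not a real stats package): the server log records every request, but a page-tag script only fires in clients that actually execute JavaScript.

```python
# Each entry is one page request; "runs_js" marks whether the client
# executed the page's tracking script (spiders generally don't).
requests = [
    {"agent": "Mozilla/5.0 (Windows NT 6.0)", "runs_js": True},   # real visitor
    {"agent": "Mozilla/5.0 (Macintosh)",      "runs_js": True},   # real visitor
    {"agent": "Googlebot/2.1",                "runs_js": False},  # search spider
    {"agent": "Slurp/3.0",                    "runs_js": False},  # search spider
]

server_side_hits = len(requests)                        # everything in the log
script_hits = sum(1 for r in requests if r["runs_js"])  # only executed tags

print(server_side_hits, script_hits)  # 4 2
```

Same traffic, two different numbers: the log-based count includes the spiders, the script-based count doesn't.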
While packages like AWStats try to differentiate real people from non-people, the issue is that
server-side stats somehow have to keep a list of which IP addresses spiders come from, or which
user-agent strings belong to a spider rather than some new browser like Google's
new Chrome browser. If a new spider appeared and the server-side stats package didn't know it was a
spider, how would it classify that spider's visits?
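A sketch of that user-agent matching and why it's fragile (the spider list here is illustrative, not AWStats' actual database):

```python
# Substring list of known crawlers, the way log-based tools classify agents.
KNOWN_SPIDERS = ["googlebot", "slurp", "msnbot", "bingbot"]

def classify(agent):
    """Label a user-agent string as 'spider' or 'browser'."""
    agent = agent.lower()
    return "spider" if any(s in agent for s in KNOWN_SPIDERS) else "browser"

print(classify("Googlebot/2.1"))           # spider
print(classify("Mozilla/5.0 Chrome/0.2"))  # browser
# A brand-new crawler the list has never heard of gets miscounted
# as a real visitor:
print(classify("ShinyNewSpider/1.0"))      # browser
```

Until the package ships an updated spider list, that new crawler inflates the "human" numbers.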
Whereas a script-based stats package would not see a spider hit at all, since spiders hardly ever execute JavaScript.
Hope that helps some people understand the differences.