Latest Stats as at 3/10/2008. These figures cover the 15-day period 16/9/2008 to 30/9/2008.

Vista Market Share
==================
For the period 16th to 30th September 2008, figures based on visits:

XP 69.2 (was 70.4, 71.6, 71.8)
Vista 18.3 (was 17.5, 15.9, 15.8)
Other Windows 4.5 (was 4.5, 4.0, 4.5, 5.6)
Mac 6.1 (was 6.0, 6.3, 5.8)
Linux 1.1 (was 1.2, 1.4, 1.4)
Others 0.7 (was 0.4, 0.7, 0.6)

Vista continues to increase. The "others" category has rebounded to 0.7. Earlier versions of Windows are hanging on at 4.5%.

Browser Breakup
===============
For the period 16th to 30th September:

IE 69.9 (was 69.7, 70.5, 71.4 and 72.0)
FF 22.3 (was 22.5, 21.7, 21.3)
Safari 4.8 (was 4.4, 4.8, 4.8)
Opera 0.7 (was 0.9, 1.0, 1.1)
Chrome 0.7 (was 0.8)
All Others 1.7 (was 1.6, 1.9, 1.7)

These figures are based on analysis of the caller's IP address. Opera is a whisker ahead of Chrome. Chrome seems to have lost a little of its initial shine, and the pro-M$ journos are still writing anti-Chrome articles. Expect massive praise for IE8 as it copies features already in Firefox.

Firefox seems to have stalled. That is despite my reading that Firefox (with the NoScript add-on) is the most secure browser in the world. It is also far more pleasant to use than IE, as it remembers what you did last time far better than IE does.

Hits from IE8 have jumped again in the last two weeks (58 now, previously 34, earlier just one or two).

Firefox 3 Takeup
================
FF1 2.7 (was 3.4, 3.7, 3.8)
FF2 30.1 (was 33.8, 55.9, 61.2, 65.1, 73.1)
FF3 67.2 (was 62.8, 40.4, 35.1, 30.5)

All this is now pretty much a non-issue, as the takeup of Firefox 3 is nearly complete. I read that Mozilla ran some sort of campaign to get FF2 users to convert to FF3.

The alarming problem here is Linux. The Linux-only figures are FF1 19%, FF2 53% and FF3 27%, and results that poor are not seen on Vista or the Mac. The problem is that many current distros still ship only Firefox 2, and I suspect many Linux users are either satisfied with FF2 or don't really know how to upgrade to FF3. They are still running whatever browser the distro came with. Perhaps in many cases it's just a live CD, not a proper installation.

Search Engines Share
====================
September 16th to 30th:

Google 86.4 (was 87.8, 87.2, 84.5, 85.7, 86.6, 86.2)
Yahoo 7.8 (was 6.8, 7.3, 7.3, 8.0, 7.5, 7.8)
Microsoft 2.3 (was 1.7, 2.0, 3.3, 2.6, 2.1, 2.3)
All others 3.5 (was 3.6, 3.5, 5.0, 3.7, 3.7)

Yahoo and M$ have rebounded since last fortnight's all-time-low results. There are no "Google killers" on the horizon. Cuil (pronounced "cool", a silly name if ever I heard one) has produced one visit in two weeks, compared with over 7,000 from Google.

The above figures are based on visits, but I have a serious counting problem when Google is both answering the search request and providing the page translation service. Translation is extremely common, especially from China and Germany. The upshot is that the figures for M$ and Yahoo are probably correct, but the total of visits counted as coming from Google search is a little low. Very roughly, the figure above of 86.4 for Google might be nearer 87.0. This is not quick to fix; a rough sketch of one way to split the two cases follows below.
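To show the kind of test I have in mind (this is only an illustration, not the code actually running on the site), here is a rough Python sketch that classifies each referrer so that Google Translate traffic is counted separately from ordinary Google search traffic. The host and path tests, and the sample referrer strings, are guesses at what the raw log entries look like.

from urllib.parse import urlparse, parse_qs
from collections import Counter

def classify_referrer(referrer):
    """Very rough classification of one referrer URL.

    Separates a click from a Google results page from a page served
    through Google's translation service, so the two are not lumped
    together when tallying search engine visits.  The host and path
    tests are guesses, not the rules the site actually uses.
    """
    if not referrer:
        return "direct"
    parsed = urlparse(referrer)
    host = parsed.netloc.lower()
    # Translation proxy first, because its host also contains "google."
    if "translate.google" in host or parsed.path.startswith("/translate"):
        return "google-translate"
    if "google." in host and "q" in parse_qs(parsed.query):
        return "google-search"
    if "search.yahoo" in host:
        return "yahoo-search"
    if "search.msn" in host or "search.live" in host:
        return "microsoft-search"
    return "other"

# Hypothetical log entries - the real ones would come from the visit table.
referrers = [
    "http://www.google.com/search?q=bondi+beach",
    "http://translate.google.com/translate?u=http://example.org/page",
    "http://search.yahoo.com/search?p=bondi+beach",
    "",
]
print(Counter(classify_referrer(r) for r in referrers))

If the translate-proxied visits did indeed start life on a Google results page, counting them towards Google would lift the 86.4 figure towards the 87.0 guessed at above.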
"Bondi Beach" is the most popular search string by a country mile, and speaking of country, the search strings include "worst country music songs of all time" and "I hate country music because country music sucks". Analysing these strings into groups, topics or categories (such as surfing, what's on, local history, accommodation at Bondi etc) has turned out to be almost impossible in the long term - it needs some sort of self-teaching programme that can see which new strings are similar to earlier strings and are thus in the same category, as opposed to the programmer defining precisely which search strings are in which broad categories. Research oriented articles say the same thing - the program has to spot the patterns. I have recently read an article about decentralised traffic flow, as opposed to the current model of centralised traffic control rooms and the regime of 40 seconds green for the main road and only 20 seconds green for the cross street etc etc. Instead, the sensors in the road from several sets of lights make bids to a local computer such as "I have three cars waiting and I would like a green in 10 seconds". The computer then does the best it can with the conflicting bids. The result is greatly improved peak hour traffic flow, and sitting at red lights in the middle of the night is a thing of the past. So, the long term plan has to be a programme to analyse search strings and automatically decide new or emerging topics. Later the results are used to add better keywords back into the text on pages to reflect what people are looking for. ================================================================