Not too long ago Google announced Google Gears, which is actually pretty cool. It is open source, and it allows offline access to data that you would normally only be able to reach when connected to the internet. A great example of this is Google Reader, which I think is the best feed reader out there. Once you download Google Gears and restart your browser (FF, of course), you can click a little icon in the upper right of the screen and it will download the 2,000 most recent entries. I'm sure the time varies depending on the size of the entries and your connection speed, but for me it took about a minute to get those 2,000 entries. Then you can disconnect from the network and go on your merry way.

I think this is a big step toward letting users access their data offline through web applications. Areas where this could be implemented by the end of the year include email, search, feed readers, etc.

I'm especially interested in seeing how this could work for search. Does it make sense to have the user download a small "master index" to their machine that they can search while offline? I think the answer is "it depends": on what they are searching for, how big the overall index is, how far down they drill into results, and how many searches they run. You can't really build snippets for search results unless you know what the search is going to be. But maybe it would be useful to have the browser download, in the background, the results for the last 20 or so searches, on the assumption that users run the same queries over and over again. And if they click on actual results, those full pages could be downloaded in the background too (something like the sketch below). Would this actually be useful? I'm not sure...
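To make the idea a little more concrete, here is a minimal sketch of the "cache your last 20 searches" approach. It is not the Gears API: the `searchUrl` helper is hypothetical, and an in-memory Map stands in for whatever offline store (Gears' LocalServer, or a real on-disk cache) would actually persist the pages.

```typescript
// A minimal sketch of the "cache your last N searches" idea.
// Assumptions (not part of Gears or any real search API):
//  - searchUrl(query) is a hypothetical helper that builds the results-page URL
//  - pages are kept in an in-memory Map; a real offline store would persist them

const MAX_CACHED_SEARCHES = 20;

// hypothetical helper: turn a query into a results-page URL
function searchUrl(query: string): string {
  return `https://www.example.com/search?q=${encodeURIComponent(query)}`;
}

// most-recently-used cache of query -> results-page HTML, in insertion order
const recentSearches = new Map<string, string>();

// called whenever the user runs a search while online
async function recordSearch(query: string): Promise<void> {
  const response = await fetch(searchUrl(query));
  const html = await response.text();

  // delete + re-insert moves the query to the end (most recent)
  recentSearches.delete(query);
  recentSearches.set(query, html);

  // evict the oldest query once we exceed the limit
  if (recentSearches.size > MAX_CACHED_SEARCHES) {
    const oldest = recentSearches.keys().next().value;
    if (oldest !== undefined) {
      recentSearches.delete(oldest);
    }
  }
}

// called while offline: return cached results if we have them
function offlineResults(query: string): string | undefined {
  return recentSearches.get(query);
}

// clicked results could be fetched in the background the same way,
// keyed by URL instead of by query
```

The same pattern would extend to clicked results: when the user opens a result while online, fetch and store the full page keyed by its URL, so it can be served back later without a connection.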