Keep the search result page response time under 0.2 seconds on average.
Our goal is to reduce the average response time to 0.1 seconds.
This is a very important principle for us.
Read more about users' experience of response times:
http://www.useit.com/alertbox/response-times.html
All sources/systems are in one index.
We have 10 different systems/information sources indexed; 4 more will be added in the coming months, bringing the index to more than 1 million documents in total.
One basic search GUI for all systems/sources that are indexed.
This is sometimes called “universal search”.
Each information type can have a different set of facets.
Our goal is also to use “federated search”, meaning that a search query is sent to more than one search function (i.e. more than one index/database). For example, a search query is sent to our search engine and its index, and at the same time to a database of scientific articles.
Search results are then blended in one search result page.
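The flow above — one query fanned out to several search functions, then blended into one result page — can be sketched as follows. The two back-end functions are hypothetical stand-ins (in reality these would be network calls to our search engine and to the article database), and blending by score is just one possible strategy:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for two back ends: our own search index and an
# external database of scientific articles. Each returns (title, score) pairs.
def search_main_index(query):
    return [("intranet: " + query + " guide", 0.9),
            ("intranet: " + query + " policy", 0.6)]

def search_article_db(query):
    return [("article: " + query + " study", 0.8)]

def federated_search(query):
    # Send the same query to both search functions at the same time.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(fn, query)
                   for fn in (search_main_index, search_article_db)]
        results = [hit for fut in futures for hit in fut.result()]
    # Blend the hits into one result list, highest score first.
    return sorted(results, key=lambda hit: hit[1], reverse=True)
```

Note that blending by raw score assumes the sources use comparable scoring, which is rarely true in practice; interleaving or normalising scores per source are common alternatives.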
A scope is a defined smaller part of the larger index, could be seen as a chapter in a book.
A scope can overlap another scope.
A scope can also consist of several smaller scopes.
Limit the recall by using scopes.
Allow the organisation to have their own scopes in the search index.
Allow a specific subject (e.g. “HR”) to have a scope.
Limit the site-search to the site’s scope.
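A minimal sketch of the scope rules above, modelling a scope as a named set of document ids (an assumption for illustration; a real engine would implement scopes as index filters). It shows scopes overlapping, a scope composed of smaller scopes, and recall being limited to a scope:

```python
# Scopes as named sets of document ids (illustrative data).
SCOPES = {
    "hr": {1, 2, 3},
    "hr-policies": {2, 3},   # overlaps the "hr" scope
    "it": {3, 4},
}

# A scope can also consist of several smaller scopes.
SCOPES["support"] = SCOPES["hr"] | SCOPES["it"]

def search_in_scope(matching_doc_ids, scope):
    # Limit the recall: keep only the hits that fall inside the scope.
    return matching_doc_ids & SCOPES[scope]
```

The same mechanism serves site-search: give each site its own scope and intersect every query's results with it.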
Use filters/facets to:
narrow the search
increase precision
“navigate” the search results
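The three uses of facets above can be illustrated with a small sketch: counting facet values over a hit list (for navigation) and filtering on a chosen value (to narrow the search and increase precision). The documents and field names are assumptions for illustration:

```python
from collections import Counter

# Illustrative hit list; field names ("type", "dept") are assumptions.
DOCS = [
    {"id": 1, "type": "document", "dept": "HR"},
    {"id": 2, "type": "event",    "dept": "HR"},
    {"id": 3, "type": "document", "dept": "IT"},
]

def facet_counts(hits, field):
    # Count how many hits fall under each facet value,
    # e.g. for rendering a facet menu next to the results.
    return Counter(doc[field] for doc in hits)

def apply_facet(hits, field, value):
    # Narrow the search: keep only hits matching the chosen facet value.
    return [doc for doc in hits if doc[field] == value]
```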
Not all results are the same.
Treat the individual search results differently on the same search result page.
Use Tiles to differentiate search results from different sources.
For example, an event is not shown the same way as a document.
Use key-matches to show a manually selected first result, triggered by a given set of keywords.
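A key-match lookup is essentially a keyword-to-result table consulted before the ordinary ranking. A minimal sketch, with illustrative keywords and URLs (in practice the key-matches would live in the search engine's configuration and be editable, per the points below):

```python
# Editor-curated key-matches: keyword -> result shown first (example data).
KEY_MATCHES = {
    "payroll": "https://intranet.example.com/hr/payroll",
    "vpn": "https://intranet.example.com/it/vpn-guide",
}

def key_match_for(query):
    # If any word of the query is a key-match keyword,
    # return its curated result to pin at the top of the page.
    for word in query.lower().split():
        if word in KEY_MATCHES:
            return KEY_MATCHES[word]
    return None
```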
Give everyone the possibility to:
access basic search statistics
see all key-matches
suggest new key-matches
Allow users to give structured feedback.
This is done via a simple web form (which fetches a little information about the user's operating system, web browser, IP address, etc., to help with support issues and troubleshooting).
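The form's server side amounts to combining the user's message with request metadata into one record for the support queue. A minimal sketch, assuming hypothetical field and header names (the contact field is optional, matching the point below about answering those who fill it in):

```python
def build_feedback_record(form_fields, request_headers, remote_ip):
    # Combine the user's feedback with request details that help
    # with support and troubleshooting. Field names are assumptions.
    return {
        "message": form_fields.get("message", ""),
        "contact": form_fields.get("email", ""),   # optional contact info
        "user_agent": request_headers.get("User-Agent", "unknown"),
        "ip_address": remote_ip,
    }
```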
We do not get a lot of feedback, but all of it so far has been useful: a few issues every week.
We always answer whoever has given us the feedback (if they filled out contact info).
This builds trust.
Typical feedback: a user can't find a series of documents. We investigate and address the problem. The cause is sometimes a bug, sometimes the relevancy model, and sometimes the content itself, which must be improved, e.g. by adding metadata or writing better headings. This often boils down to helping content editors and training them to do “the right thing”.