If you're not finding the answer to your question on this website, this page will help you. If you're still unsure, don't hesitate to send your question to us directly.
The exact time of day might vary each day, but we'll crawl your website at most every 24 hours.
The DocSearch crawler runs on our own infrastructure. It reads the HTML content from your website and populates an Algolia index every day. All you need to do is keep your website online; we take care of the rest. To edit your configuration, please submit a pull request.
We know that paying for search infrastructure is a cost not all open source projects can afford. That's why we decided to keep DocSearch free for everyone. All we ask in exchange is that you keep the "Search by Algolia" logo displayed next to the search results.
If this is not possible for you, you're free to open your own Algolia account and run DocSearch on your own without this limitation. In that case, though, depending on the size of your documentation, you might need a paid account (free accounts can hold up to 10,000 records).
We save the data we extract from your website's markup, converted into a custom JSON format rather than raw HTML. This is the data we put in the Algolia DocSearch index; the selectors in your config define what data to scrape.
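For context, a scraper configuration is a JSON file whose `selectors` map record levels to CSS selectors. The values below are illustrative, not a recommendation for any particular site:

```json
{
  "index_name": "example",
  "start_urls": ["https://example.com/docs/"],
  "selectors": {
    "lvl0": "h1",
    "lvl1": "h2",
    "lvl2": "h3",
    "text": "p"
  }
}
```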
As the website owner, you also get access to the Algolia Analytics dashboard, which gives you more insight into the anonymized searches made on your website. You'll see the most searched terms, and those that lead to no results.
With such analytics, you will better understand what your users are searching for.
If you don't have Analytics access, send us an email and we'll enable it.
The free DocSearch we provide only crawls documentation pages. To use it on other parts of your website, you'll need to create your own Algolia account and either:
Yes, but we do not recommend it.
Code samples are a great way for humans to understand how to use a specific method. They often require boilerplate code, though, repeated across examples, which adds noise to the search results.
What we recommend instead is to exclude code blocks from the indexing (with the `selectors_exclude` option in your config) and to structure your content so the method names are present in the headers.
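As a sketch, such an exclusion might look like this in the JSON config (the selectors are examples; pick whatever matches your markup):

```json
{
  "selectors_exclude": ["pre", "code"]
}
```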
This can happen when you have more than one URL pointing to the same content, for example with
You can set `stop_urls` to all the patterns you want to exclude. The following example will exclude all URLs ending with
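As an illustration, a `stop_urls` entry in the scraper config is a list of URL patterns; the patterns below are hypothetical:

```json
{
  "stop_urls": ["https://example.com/docs/next/", "\\?print=true$"]
}
```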
There are some reasons why you might want to change your settings from the dashboard.
However, every successful crawl sets the DocSearch index settings, so anything you edit from the dashboard will be overridden at the next crawl. We therefore do not recommend editing anything from the dashboard: these settings should come from the JSON configuration itself.
You can use the `custom_settings` parameter for this purpose.
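For example, `custom_settings` lets the configuration carry Algolia index settings that would otherwise be lost on reindex. The settings below are illustrative:

```json
{
  "index_name": "example",
  "custom_settings": {
    "separatorsToIndex": "_",
    "typoTolerance": "min"
  }
}
```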
We'd love to help!
If the documentation website of one of your favorite tools is missing DocSearch, we encourage you to file an issue in its repository explaining how DocSearch could help. Feel free to send us an email as well, and we'll provide all the help we can.
The `nb_hits` property in your configuration keeps track of the number of records extracted and indexed by the last DocSearch run. A crawl updates this number automatically.
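In the configuration file, it appears as a plain counter (the value below is made up):

```json
{
  "index_name": "example",
  "nb_hits": 46250
}
```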
The DocSearch scraper follows the recommended atomic-reindexing strategy. It creates a brand new temporary index and populates it with the data scraped from your website. When successful, the crawl overwrites the old index defined in your configuration with the `index_name` key.
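The idea behind atomic reindexing can be sketched with an in-memory simulation. A real crawl uses Algolia's index operations; the names and structure below are purely illustrative:

```javascript
// Simulated index store: index name -> array of records.
// "docs_prod" starts with a stale record from a previous crawl.
const indices = new Map([["docs_prod", [{ url: "stale" }]]]);

function crawl(pages) {
  // 1. Populate a brand new temporary index with freshly scraped records.
  indices.set("docs_prod_tmp", pages.map((url) => ({ url })));
  // 2. On success, atomically replace the live index with the temporary one,
  //    so searchers never see a half-built index.
  indices.set("docs_prod", indices.get("docs_prod_tmp"));
  indices.delete("docs_prod_tmp");
}

crawl(["https://example.com/docs/intro"]);
console.log(indices.get("docs_prod")); // [{ url: "https://example.com/docs/intro" }]
```

If the crawl fails midway, only the temporary index is affected and the live index keeps serving the previous data.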
We scrape your website according to your configuration, but it can happen that some pages are missing from the search. Some possible reasons for that are:
- Make sure you are not filtering the search by wrongly using `facetFilters`.
- Make sure that another indexed page references the missing page with an
- Make sure you are providing a compliant sitemap in your configuration and that it references the missing page.
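A sitemap is referenced from the scraper config through `sitemap_urls`; the URL below is an example:

```json
{
  "sitemap_urls": ["https://example.com/sitemap.xml"]
}
```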
No, you can't. All crawls are run once a day, and the position of your crawl in the queue is determined by the lexicographic order of your `index_name` among the whole list of featured websites.
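To illustrate the ordering (the index names below are made up), lexicographic order is simply a plain string sort:

```javascript
// Hypothetical index_name values of three featured websites.
const indexNames = ["docusaurus", "bootstrap", "algolia"];

// Crawls run in the lexicographic order of index_name.
const queue = [...indexNames].sort();
console.log(queue); // ["algolia", "bootstrap", "docusaurus"]
```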
If none of the previous points help, you can contact our support.
We built this website with Docusaurus 2. We were helped by a great man who inspired us a lot, Endi. We want to pay tribute to this exceptional human being, who will always be part of the DocSearch project. Rest in peace, mate!
We are pre-releasing v3 on Docusaurus 2. This helps us iterate on it faster and make sure we are ready to release a vanilla version. We will provide a migration guide to help you move to this new version. If you want more information on this version, you can watch the Search Party we held on this topic.
Can I share the `apiKey` in my repo?
The `apiKey` the DocSearch team provides is a search-only key and can be safely shared publicly. You can track it in your version control system (e.g. git). If you are running the scraper on your own, please make sure to create a search-only key, and do not share your Admin key.