- API Query Unit: Allows customers to retrieve data from Bigdata.com.
- Pages of uploaded files: Allows customers to upload files to Bigdata.com.
  - PDF format files
  - Other format files (CSV, XML, JSON, HTML, TXT, DOCX)
  - In standard contracts, the platform considers a page to be a group of 3,000 characters.
We encourage you to monitor usage and contact us if you would like
assistance in choosing the right subscription plan for your
organization.
To follow along with this how-to guide, set up the Python SDK in your local environment by following the Prerequisites instructions.
Subscription level
Use the method get_details to retrieve subscription details:
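The snippet below is a minimal sketch of this call. Only get_details is taken from this guide; the import path, the client constructor, and the subscription attribute are assumptions, so check the SDK reference for the exact names in your version.

```python
# Minimal sketch: retrieve and print your subscription details.
# NOTE: the import path, client constructor, and `subscription` attribute
# are assumptions; only get_details() is documented in this guide.
from bigdata_client import Bigdata

bigdata = Bigdata("<your-username>", "<your-password>")

details = bigdata.subscription.get_details()
print(details)
```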
API Query Unit
Bigdata.com measures the amount of retrieved data with API Query Units; each unit allows retrieval of 10 text chunks.
How many API Query Units does a Search consume?
The method search.run() accepts a parameter to specify the number of documents or chunks to retrieve. Every ten retrieved chunks count as one API Query Unit.
You can control usage by specifying the number of chunks to retrieve with the parameter ChunkLimit: search.run(ChunkLimit(100)) will retrieve a maximum of 100 chunks and therefore consume a maximum of 10 API Query Units. The response might contain fewer chunks because duplicates are discarded, so the actual usage could be lower.
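As a sketch of the rule above, the example below runs a search capped at 100 chunks, which corresponds to at most 10 API Query Units. The query construction and import paths are illustrative assumptions; ChunkLimit and search.run() are the parts documented here.

```python
# Sketch: cap retrieval at 100 chunks (at most 10 API Query Units).
# NOTE: the import paths and the query/search construction are assumptions;
# ChunkLimit and search.run() come from this guide.
from bigdata_client import Bigdata
from bigdata_client.query import Keyword        # assumed import path
from bigdata_client.search import ChunkLimit    # assumed import path

bigdata = Bigdata("<your-username>", "<your-password>")

# Build a search as usual (the query shown here is illustrative).
search = bigdata.search.new(Keyword("inflation"))

# Retrieve at most 100 chunks -> at most 10 API Query Units.
# Duplicate chunks are discarded, so the actual usage may be lower.
results = search.run(ChunkLimit(100))
```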
How can I see the API Query Unit usage per Search?
Check the usage of each search run at the Search level, described below.
Pages of uploaded files
Bigdata.com measures the amount of uploaded files in pages:
- PDF format
- Other format (CSV, XML, JSON, HTML, TXT, DOCX…)
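To make the standard-contract rule concrete (a page is a group of 3,000 characters), the snippet below estimates how many pages a text-based upload would count as. The rounding-up behaviour and the file name are assumptions for illustration only.

```python
import math

# Illustrative only: under a standard contract a page is a group of
# 3,000 characters. Rounding partial pages up is an assumption.
CHARS_PER_PAGE = 3000

def estimate_pages(text: str) -> int:
    """Estimate the page count of a text-based upload."""
    return max(1, math.ceil(len(text) / CHARS_PER_PAGE))

with open("report.txt", encoding="utf-8") as f:  # hypothetical file
    print(estimate_pages(f.read()), "pages")
```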
Search level
Each new search tracks the amount of retrieved data, and you can consult it at any time with the method get_usage(). Initially, the usage of a new search is 0. After calling run(), the method get_usage() returns the Query Units used.
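The sketch below checks usage before and after running a search. The search construction follows the earlier illustrative example; only run(), get_usage(), and ChunkLimit are taken from this guide.

```python
# Sketch: consult Query Unit usage at the Search level.
# NOTE: the search construction is an assumption carried over from the
# earlier example; run(), get_usage(), and ChunkLimit come from this guide.
search = bigdata.search.new(Keyword("interest rates"))

print(search.get_usage())    # a new search starts at 0

search.run(ChunkLimit(100))  # retrieve up to 100 chunks

# After run(), get_usage() returns the Query Units consumed by this search.
print(search.get_usage())
```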
The response might contain a smaller number of chunks due to discarding duplicates, so the usage could be lower. Check the how-to guide Retrieve limited chunks for more details.