- `find_sources` method to return the favicon URL.
- `chat.ask` method to prevent an error when establishing the connection.
- `Any`: please see an example in Query filters > Watchlist.
- `chat.ask` now supports source filtering. Check the `source_filter` parameter in the Chat API Reference for more details.
- `file.wait_for_analysis_complete()`. Check Upload Content for more details.
- `ChatScope.EARNINGS_CALL`: it uses transcripts from the Quartr subscription.
- `ChatScope.FACTSET_TRANSCRIPTS`: it uses transcripts from the FactSet subscription.
- `file.wait_for_completion()` now waits for the file to be fully processed, including the indexing step if required. Check Upload Content for more details. The default timeout is 2400 seconds (40 minutes), but you can customize it with the `timeout` parameter (in seconds).
- `DocumentVersion` in `chat.ask` requests.
- `chat.ask` supports streaming with the `streaming` parameter. More details in the Chat API Reference.
- `owned` parameter.
- `SortBy.DATE_ASC` for sorting in ascending (`asc`) order.
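The timeout behaviour described for `file.wait_for_completion()` can be pictured as a polling loop. The helper below is a hypothetical, self-contained sketch (the SDK's real implementation is not shown in these notes); the 2400-second default mirrors the one documented above:

```python
import time

def wait_for_completion(get_status, timeout=2400, poll_interval=1.0):
    """Sketch of a wait_for_completion-style helper: poll a status
    callable until it reports "COMPLETED" or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status == "COMPLETED":
            return status
        time.sleep(poll_interval)
    raise TimeoutError(f"file not processed within {timeout} seconds")

# Example with a fake status source that completes on the third poll.
statuses = iter(["UPLOADING", "INDEXING", "COMPLETED"])
print(wait_for_completion(lambda: next(statuses), timeout=10, poll_interval=0))
# prints "COMPLETED"
```

With the real SDK you would simply call `file.wait_for_completion(timeout=...)`; the sketch only illustrates why a longer `timeout` is needed when indexing is part of processing.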
- `scope` parameter for the `Chat.ask` method.
- `find_etfs`.
- `BigdataClientIncompatibleStateError` is raised when trying to tag or share a file whose upload and classification process is not COMPLETED.
- `BigdataClientSimilarityPayloadTooLarge` to notify users when the input of a Similarity query is too large.
- `rerank_threshold` param for `bigdata_client.search.new`.
- Size at which `RequestMaxLimitExceeds` is raised increased from 8KB to 64KB.
- `bigdata_client.document.download_annotated_dict` method.
- `BigdataClientIncompatibleStateError`.
- `bigdata_client.uploads.list_my_tags` (list my tags).
- `bigdata_client.uploads.list_tags_shared_with_me` (list tags shared with me).
- `get_companies_by_isin`
- `get_companies_by_cusip`
- `get_companies_by_sedol`
- `get_companies_by_listing`
- `subscription.get_details()` will include information about uploaded pages of PDF files: `pdf_upload_pages`. Check Monitor usage.
- `company_shared_permission` field.
- `subscription.get_details()` will correctly report uploaded pages of files other than PDF: `file_upload_pages`.
- `batch_file_analytics_download.py` and `batch_file_upload.py` now correctly handle `BigdataClientRateLimitError`.
- `FileTag`; usage here.
- `BigdataClientAuthFlowError`, `BigdataClientTooManySignInAttemptsError`.
- `bigdata_client.subscription.get_details()` method.
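Handling `BigdataClientRateLimitError` in batch scripts typically means retrying with backoff. The sketch below is hedged: the exception class is a stand-in defined locally so the example runs without the SDK, and the backoff policy is an assumption, not the scripts' actual implementation:

```python
import time

class BigdataClientRateLimitError(Exception):
    """Stand-in for the SDK's rate-limit error (defined here so the
    sketch is self-contained)."""

def call_with_retry(func, max_attempts=5, base_delay=0.5):
    """Retry func with exponential backoff when the rate limit is hit."""
    for attempt in range(max_attempts):
        try:
            return func()
        except BigdataClientRateLimitError:
            if attempt == max_attempts - 1:
                raise  # give up after the last attempt
            time.sleep(base_delay * (2 ** attempt))

# Fake endpoint that is rate-limited twice, then succeeds.
calls = {"n": 0}
def flaky_upload():
    calls["n"] += 1
    if calls["n"] < 3:
        raise BigdataClientRateLimitError()
    return "uploaded"

print(call_with_retry(flaky_upload, base_delay=0))  # prints "uploaded"
```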
- `add_tags`, `remove_tags` and `set_tags` for `File` class objects. These methods allow modifying the tags on existing files.
- `bigdata.upload.Uploads.list` method to retrieve all the files uploaded by the user. Example of usage:
- `bigdata_client.models.entities.Concept` in the knowledge graph service since v2.1.0: `bigdata_client.models.entities.Topic` objects were incorrectly parsed as `bigdata_client.models.entities.Concept`.
- `bigdata_client.models.sources.Source` in the knowledge graph service.
- `subscription.get_details()` for API usage monitoring.
- `get_usage()` on the class `bigdata_client.search.Search`, which returns the API query units used by each search instance.
- `autosuggest`, `find_concepts`, `find_companies`, `find_people`, `find_places`, `find_organizations`, `find_products`, `find_sources` and `find_topics` should now be used with a single parameter instead of a list. Example of current usage:

The list form is deprecated and compatibility with it will be removed in the future. This update fixes problems for clients using the methods above with concurrency. A more detailed guide is included in the knowledge graph documentation and how-to guides.
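The single-parameter migration can be sketched as follows. This is a hypothetical stand-in, not the SDK's code: the local `find_companies` fakes its result instead of calling the autosuggest service, and the deprecation warning on list input is an assumption about the compatibility shim:

```python
import warnings

def find_companies(term):
    """Sketch of the new calling convention: one term per call.
    Passing a list still works here, but emits a DeprecationWarning."""
    if isinstance(term, list):
        warnings.warn(
            "passing a list is deprecated; call find_companies once per term",
            DeprecationWarning,
        )
        return [find_companies(t) for t in term]
    # A real lookup would hit the autosuggest service; we fake a result.
    return {"query": term, "matches": []}

print(find_companies("Tesla")["query"])  # single-term call: prints "Tesla"
```

Under concurrency, one term per call also means each request is independent, which is the problem the update fixes.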
Fixed
- Enhanced methods in the knowledge_graph service to avoid errors when customers explore the Knowledge Graph from a multithreaded environment.
- `url` property for the `Document` class.
- `AbsoluteDateRange` can now be created with a timezone.
- `post`, `patch` and `put` requests now validate that the JSON request body does not exceed 8KB.
- `verify_ssl` parameter for the `bigdata_client.Bigdata` class, to be able to skip SSL verification for a proxy.
- The package has been renamed from `bigdata` to `bigdata_client`. We are doing this to ensure that there are no conflicts with other commonly used Python packages. This change will require action on your part.

What you need to do:
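The 8KB body validation mentioned above can be illustrated with a small client-side check. This is a hedged sketch: `RequestMaxLimitExceeds` is redefined locally so the example runs standalone, and the exact check the SDK performs may differ:

```python
import json

MAX_BODY_BYTES = 8 * 1024  # the 8KB limit described in this release

class RequestMaxLimitExceeds(Exception):
    """Stand-in for the SDK's oversized-body error."""

def validate_body(payload):
    """Serialize the payload and reject it if it exceeds the limit."""
    body = json.dumps(payload).encode("utf-8")
    if len(body) > MAX_BODY_BYTES:
        raise RequestMaxLimitExceeds(
            f"request body is {len(body)} bytes, limit is {MAX_BODY_BYTES}"
        )
    return body

validate_body({"query": "ok"})             # small body passes
try:
    validate_body({"blob": "x" * 10_000})  # ~10KB body is rejected
except RequestMaxLimitExceeds as exc:
    print(exc)
```

(A later release raises this limit to 64KB, as noted above.)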
Currently, you import classes from the package `bigdata`. Once you update the package to version 2.0.0, be sure to modify your Python scripts to import classes from `bigdata_client` instead of `bigdata` to avoid any issues. Example of current imports:
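One way to apply the rename across a codebase is a mechanical search-and-replace. The one-liner below is a hypothetical aid, not an official migration tool; it assumes GNU `sed` (for `\b` word boundaries) and should be reviewed as a diff before committing. Note that `\b` leaves `bigdata_client` itself untouched, since the underscore is a word character:

```shell
# Rewrite 'bigdata' imports to 'bigdata_client' in a sample snippet.
printf 'from bigdata import Bigdata\nfrom bigdata.models.entities import Company\n' \
  | sed 's/\bbigdata\b/bigdata_client/g'
```

Run the same `sed` expression with `-i` over your `.py` files once you are happy with the output.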
- `download_annotated_dict` method for the `Document` class.
- `BIGDATA_MAX_PARALLEL_REQUESTS`.
- `skip_metadata: Optional[bool]` for `Uploads.upload_from_disk`; it allows skipping the loading of file metadata when uploading a file.
- `bigdata.query`/`api` module.
- `get_entities()`, `get_sources()`, and `get_topics()` now return analytic descriptions when available.
- `BIGDATA_USER` has been renamed to `BIGDATA_USERNAME`. The old version is still supported but marked as deprecated.
- `Bigdata.search.new()`.
- `Document` object. It contains related `Document` objects.
- `search.find_concepts` to return the first concepts from the autosuggest service.
- `Company`, `Concept`, `Facility`, `Landmark`, `Organization`, `OrganizationType`, `Person`, `Place`, `Product`, `ProductType` from `bigdata.date`.
- `Document` object. It contains related `Document` objects.
- `File` object, or using the new `bigdata.uploads` methods.
- `company_shared_permission` to know whether they are being shared or not.
- `bigdata.uploads.list_shared`.
- `company_shared_permission` to know whether they are being shared or not.
- `SharePermission.READ` instead of `SearchSharePermission.READ`.
- `Bigdata.content_search` renamed to `Bigdata.search`.
- `ContentSearch.new_from_query` renamed to `ContentSearch.new`.
- `any_`, `all_` renamed to `Any`, `All`.
- `Document("BFA16B80ED117EAA5693E8BA")`
- `find_companies`, `find_people`, `find_places`, `find_organizations`, `find_products`, `find_sources`, `find_topics`.
- `Search.run`
- `DocumentTypes` -> `TranscriptTypes`.
- `FileType` -> `DocumentType`.
- `TranscriptTypes.EARNINGS_CALL`
- `SectionMetadata.QUESTION`
- `Search.limit_stories` -> `Search.limit_documents`
- `Story` -> `Document`, as well as all of its components, which are now named like "Document".
- `Document`: `people` was not being returned by comentions.
- `Search.share_with_company` and `Search.unshare_with_company` to share and "unshare" a search with the user's company ([#11]).
- `File.get_analytics_dict` to get the analytics directly in memory, as a dictionary ([#14]). That is consistent with other methods like `File.get_annotated_dict`.
- `Story.chunks` ([#13]).
- `Story.sentiment` and `Chunk.sentiment` to be between -1 and 1 ([#13]).
- `relevance` attribute, which is a float greater than 0. You can access it on `story.chunks[x].relevance` ([#10]).
- `Bigdata.content_search.new_search`, in favor of the more powerful `Bigdata.content_search.new_from_query` ([#9]).
- The `scope` parameter of `Bigdata.content_search.new_from_query` was being ignored, causing the search to always have the scope `FileType.ALL` ([#12]).
- `Bigdata.internal_content` to support interacting with internal content (uploading documents, listing, getting, deleting, downloading the original document/annotations/analytics, etc.) ([#7]).
- `Bigdata.watchlists` to create, update, delete, get and list watchlists ([#5]).
- `ValidationError`.
- `new_from_query` from the `Search` class ([#3]).
- `Story.__str__` so it is printed when using `print(story)`.
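A common pattern with the chunk attributes described above (`relevance` greater than 0, `sentiment` in [-1, 1]) is keeping only the most relevant chunks. The sketch below uses plain dicts as stand-ins for `Chunk` objects; the field names follow the changelog, but the filtering helper is hypothetical:

```python
# Dicts standing in for Chunk objects with the documented attributes.
chunks = [
    {"text": "guidance raised", "relevance": 0.92, "sentiment": 0.6},
    {"text": "boilerplate",     "relevance": 0.05, "sentiment": 0.0},
    {"text": "margin pressure", "relevance": 0.71, "sentiment": -0.4},
]

def top_chunks(chunks, min_relevance=0.5):
    """Keep chunks above a relevance threshold, most relevant first."""
    keep = [c for c in chunks if c["relevance"] >= min_relevance]
    return sorted(keep, key=lambda c: c["relevance"], reverse=True)

for c in top_chunks(chunks):
    print(f'{c["relevance"]:.2f} {c["sentiment"]:+.1f} {c["text"]}')
```

With the real objects the access pattern is `story.chunks[x].relevance`, as noted above.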
- `sortby` for the `new_search` method.
- `__str__` to have better-looking output.
- `get_related` renamed to `get_comentions`.
- `AbsoluteDateRange`'s `__init__` now accepts both `datetime` and `str` types.
- `from_strings` has been removed in favor of `__init__` passing strings.
- `bigdata.content_search.new_search()`.
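The dual-type `__init__` for `AbsoluteDateRange` can be sketched as below. This is a hypothetical reimplementation for illustration only; the SDK's real class may normalize timezones and formats differently:

```python
from datetime import datetime

class AbsoluteDateRange:
    """Sketch of an __init__ accepting both datetime objects and
    ISO-8601 strings, replacing the removed from_strings helper."""

    def __init__(self, start, end):
        self.start = self._coerce(start)
        self.end = self._coerce(end)

    @staticmethod
    def _coerce(value):
        if isinstance(value, datetime):
            return value
        return datetime.fromisoformat(value)  # assumes ISO-8601 strings

# Mixed str/datetime inputs both work in a single call.
r = AbsoluteDateRange("2024-01-01T00:00:00", datetime(2024, 6, 30))
print(r.start.year, r.end.month)  # prints "2024 6"
```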