April 16, 2026
API, Research / 10 minute read
A good credit analyst reviews maybe eight names a week, and the process they’ve built over the years (the order they look at ratios, which covenants matter, which pieces of management commentary are worth re-reading) is repeatable in their own hands and essentially nowhere else. Scaling it across a 200-name book has historically meant hiring more analysts and hoping each one runs it the same way, which never quite holds up, because no two analysts converge on the same checklist even when they think they have.
We built the Workflows API to close that gap. You express your research approach as a reusable specification, a template, and run it across any entity in our Knowledge Graph. What comes back is a structured report with every claim tied to the source document that produced it, and with the same analytical framework applied to every name you run it against.
What it is
A workflow template is a JSON object that describes a piece of research. It names the analytical dimensions that matter, declares the entity you want to investigate, and specifies the shape of the output. You write it once and run it anywhere. Each template runs on an orchestration layer that dispatches focused sub-agents against Bigdata’s 1B+ document corpus, including news, filings, transcripts, expert calls, and structured data. Findings are then cross-referenced and synthesized into one report.

Why it looks the way it does
Reproducibility over prompting
A prompt is a one-shot. Ask the same question twice and you get two different answers with different framing and different coverage, because the model has no obligation to converge on the same structure. That’s fine for exploration, and it breaks down the moment you try to run the same analysis across a coverage universe. Templates capture the framework itself. The entity slot changes from run to run; the analytical structure is fixed. A credit analyst running quarterly reviews across 50 issuers gets the same sections in the same order for every name, with the same ratios computed against the same thresholds, so findings stack across the book rather than living as one-off memos that can’t be lined up next to each other. The alternative has historically been hiring more analysts and accepting that each one applies the framework a little differently. With templates, the process is portable, and the 50th name gets the same treatment the first one did.

Grounded, auditable output
Financial research without citations is fiction. If an analyst cannot click through from a number in the report to the filing or transcript that produced it, the report doesn’t move off the page into a recommendation, and nobody is banking their career on an unsourced claim. Every response carries structured grounding metadata attached to each claim, including the source document, the exact excerpt, the timestamp, the document type, and the URL. Every sentence that makes a factual assertion is tied back to the passage that supports it. When compliance asks where a number came from, the answer is one field away in the response. When a PM wants to read the original language before acting on a signal, the passage is already linked. Full auditability of the output is the bar any institutional workflow has to clear.

Depth across many dimensions, in parallel
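As an illustration, a grounded claim in a response might carry metadata along these lines. The field names and values here are a sketch inferred from the fields named above (source document, excerpt, timestamp, document type, URL), not the exact response schema:

```json
{
  "claim": "Management reiterated full-year EBITDA guidance on the Q3 call.",
  "grounding": {
    "source_document_id": "doc_000000",
    "document_type": "transcript",
    "timestamp": "2026-01-28T14:30:00Z",
    "excerpt": "We remain comfortable with our full-year EBITDA range...",
    "url": "https://example.com/documents/doc_000000"
  }
}
```

The point is that the pointer from claim to passage travels with the claim itself, so an audit trail exists without any post-hoc reconstruction.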
Covering a lot of ground and going deep at the same time is the bit most agent systems get wrong: they flatten out. The Workflows API runs each analytical dimension as its own focused investigation and brings the findings together at the end. An earnings preview template dispatches one sub-agent per independent research step, each with a clean context and one question to answer. One agent is pulling the eight-quarter surprise history. Another is extracting management’s guidance from the last four calls and comparing it to what actually came in. A third is mapping the current consensus and flagging recent revisions. The 8-K trawl runs at the same time, not waiting in a queue behind the others. Two hours of night-before prep for an analyst completes in minutes, with every section driven by its own focused search rather than one generalist agent trying to hold the whole picture in its head. And because Bigdata’s archive goes back more than twenty years, a surprise-history step isn’t limited to eight quarters if you want to see how a name behaved through a full cycle.

How it works
Here is a compact earnings preview template. The shape matches what production templates look like, trimmed for readability.

The expected_input block declares that this template takes a company, specifically an rp_entity_id from our Knowledge Graph. That ID is the canonical handle for the entity, so the sub-agents get the full entity context (name, ticker, sector, descriptive metadata) rather than a raw string they have to disambiguate on their own. The research_plan.steps are written the way a PM would brief an analyst, specifying what to find and leaving the how to the agent. The freshness_boost tells the search layer to weight recent documents more heavily, which is the right call for an earnings preview where the last 30 days of news carries most of the signal.
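A minimal sketch of such a template follows. Only expected_input, rp_entity_id, research_plan.steps, and freshness_boost are fields named in this post; everything else (the step names, the output block, the exact value types) is illustrative, and the authoritative schema is in the API reference:

```json
{
  "name": "earnings_preview",
  "expected_input": {
    "entity_type": "company",
    "identifier": "rp_entity_id"
  },
  "research_plan": {
    "steps": [
      "Pull the last eight quarters of EPS and revenue surprises versus consensus.",
      "Extract management guidance from the last four earnings calls and compare it to reported results.",
      "Map the current consensus and flag recent estimate revisions.",
      "Scan recent 8-K filings for material events ahead of the print."
    ]
  },
  "search": {
    "freshness_boost": true
  },
  "output": {
    "format": "report",
    "grounding": true
  }
}
```

Each entry in steps becomes one focused sub-agent investigation at run time; the company itself is supplied per run, not baked into the template.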
Execution is a single endpoint. POST /v1/workflow/execute takes either a stored template ID or an inline definition, and the response streams back. You receive progress events as each step starts and completes, then the final report with inline grounding. Event formats and the full template schema are covered in the Workflows API quickstart and Workflows API reference.
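As a sketch of the request shape described above (the exact payload and event schema are in the quickstart, and the entity ID here is a placeholder), an execution call referencing a stored template might send a body like:

```json
{
  "template_id": "earnings_preview",
  "input": {
    "rp_entity_id": "ABC123"
  }
}
```

An inline definition would replace template_id with the full template object; either way, the response streams progress events per step before the final grounded report arrives.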