Wire HostMetadata discovery into DatabricksConfig (#680)
Merged

hectorcast-db merged 1 commit into main on Feb 27, 2026
Conversation
tanmay-db approved these changes on Feb 26, 2026
github-merge-queue bot pushed a commit that referenced this pull request on Feb 27, 2026
## 🥞 Stacked PR

Use this [link](https://github.com/databricks/databricks-sdk-java/pull/678/files) to review incremental changes.

- [**stack/config-auto-complete-1**](#678) [[Files changed](https://github.com/databricks/databricks-sdk-java/pull/678/files)]
- [stack/config-auto-complete-2](#679) [[Files changed](https://github.com/databricks/databricks-sdk-java/pull/679/files/82fbbc2e5c9b9f38705df9a88ad0b6898d793b82..37e2032fd767f99b0c52b6ea57f60dea92e516e1)]
- [stack/config-auto-complete-3](#680) [[Files changed](https://github.com/databricks/databricks-sdk-java/pull/680/files/adbceef3cc10ffef3e5844fe09f58c210f922f6a..18b73249bb564d01fe237f2464145a2ca025e9b4)]

---

## Summary

Adds `HostMetadata` and a package-private `getHostMetadata()` on `DatabricksConfig` for parsing the `/.well-known/databricks-config` discovery endpoint.

## Why

Databricks hosts expose a standard `/.well-known/databricks-config` endpoint that returns the OIDC endpoint, account ID, and workspace ID in a single request. The SDK had no primitive to consume it — OIDC endpoint discovery was handled entirely through host-type-specific logic that requires the caller to already know whether the host is a workspace, account console, or unified host.

This PR introduces the foundational building block: a `HostMetadata` class and a package-private `getHostMetadata()` on `DatabricksConfig` that fetches and parses the endpoint. The method returns raw metadata with no substitution (e.g. `{account_id}` placeholders are left as-is), keeping it a pure discovery primitive. Callers decide how to interpret the result.

## What changed

### Interface changes

- **`HostMetadata`** — New class in `com.databricks.sdk.core.oauth` with fields `oidcEndpoint`, `accountId`, `workspaceId`. Deserialized from JSON via Jackson.
- **`DatabricksConfig.getHostMetadata()` (package-private)** — Fetches `{host}/.well-known/databricks-config` and returns a `HostMetadata`. Throws `DatabricksException` on any HTTP error.

### Behavioral changes

None. No existing code paths are modified.

### Internal changes

Tests in `DatabricksConfigTest` covering the two response shapes (workspace with static OIDC endpoint, account host with `{account_id}` template) and the HTTP error path.

## How is this tested?

Unit tests in `DatabricksConfigTest` using `FixtureServer`. Both workspace and account host response shapes are exercised, plus an HTTP error case.

NO_CHANGELOG=true
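The two response shapes that the tests exercise can be sketched as follows. This is an illustration only: the JSON bodies and URLs are made up, and the tiny `field()` regex helper is a stand-in for the Jackson deserialization the PR actually uses.

```java
// Illustrative sketch only: the SDK deserializes these documents into
// HostMetadata via Jackson; a regex helper stands in for that machinery here.
public class HostMetadataSketch {
  // Shape 1: workspace host -- static OIDC endpoint plus a workspace_id.
  static final String WORKSPACE_JSON =
      "{\"oidc_endpoint\":\"https://example.databricks.com/oidc\","
          + "\"workspace_id\":\"123\"}";

  // Shape 2: account host -- templated endpoint. getHostMetadata() returns
  // the {account_id} placeholder verbatim, with no substitution.
  static final String ACCOUNT_JSON =
      "{\"oidc_endpoint\":\"https://accounts.example.com/oidc/accounts/{account_id}\"}";

  // Minimal stand-in for Jackson field extraction.
  static String field(String json, String name) {
    java.util.regex.Matcher m =
        java.util.regex.Pattern
            .compile("\"" + name + "\"\\s*:\\s*\"([^\"]*)\"")
            .matcher(json);
    return m.find() ? m.group(1) : null;
  }

  public static void main(String[] args) {
    System.out.println(field(WORKSPACE_JSON, "oidc_endpoint"));
    System.out.println(field(ACCOUNT_JSON, "oidc_endpoint"));
  }
}
```

The second printed line keeps `{account_id}` intact, matching the "pure discovery primitive" contract described in the summary.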
github-merge-queue bot pushed a commit that referenced this pull request on Feb 27, 2026
## 🥞 Stacked PR

Use this [link](https://github.com/databricks/databricks-sdk-java/pull/679/files) to review incremental changes.

- [**stack/config-auto-complete-2**](#679) [[Files changed](https://github.com/databricks/databricks-sdk-java/pull/679/files)]
- [stack/config-auto-complete-3](#680) [[Files changed](https://github.com/databricks/databricks-sdk-java/pull/680/files/3b4e6d886efaf9fdedf7de2a5d356ea7eea18617..733d3aefce14ba9aac8ecc82e694f9c2e85fda3e)]

---

## Summary

Adds tests covering OIDC endpoint resolution directly from a discovery URL, exercising the existing `discoveryUrl` config field and the `getDatabricksOidcEndpoints()` short-circuit path.

## Why

The OIDC discovery URL returned by `/.well-known/databricks-config` points to a standard OAuth 2.0 authorization server metadata document (`authorization_endpoint`, `token_endpoint`). The Java SDK already has `discoveryUrl` (`DATABRICKS_DISCOVERY_URL`) and `getDatabricksOidcEndpoints()` fetching directly from it when set — this is the Java equivalent of the Python SDK's `get_endpoints_from_url()` addition. This PR adds the missing test coverage for those paths.

## What changed

### Interface changes

None. The `discoveryUrl` config field (`DATABRICKS_DISCOVERY_URL`) and the `getDatabricksOidcEndpoints()` short-circuit were already present in the Java SDK.

### Behavioral changes

None. No existing code paths are modified.

### Internal changes

- `testDiscoveryUrlFromEnv` — verifies the `DATABRICKS_DISCOVERY_URL` env var is loaded into `discoveryUrl`.
- `testDatabricksOidcEndpointsUsesDiscoveryUrl` — verifies `getDatabricksOidcEndpoints()` short-circuits to fetch directly from `discoveryUrl` when set.

## How is this tested?

Unit tests in `DatabricksConfigTest` using `FixtureServer`.

NO_CHANGELOG=true
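The short-circuit under test can be sketched in a few lines. The method and helper names below are illustrative, not the SDK's actual internals; the fallback URL shown is the conventional workspace-host form, assumed here for contrast.

```java
// Hedged sketch of the discoveryUrl short-circuit: when a discovery URL is
// configured, it wins over host-based derivation of the OIDC metadata URL.
public class OidcShortCircuitSketch {
  static String resolveMetadataUrl(String discoveryUrl, String host) {
    if (discoveryUrl != null && !discoveryUrl.isEmpty()) {
      // Short-circuit: fetch the authorization server metadata directly.
      return discoveryUrl;
    }
    // Otherwise fall back to deriving the URL from the host (workspace form
    // shown; the real SDK logic is host-type-specific).
    return host + "/oidc/.well-known/oauth-authorization-server";
  }

  public static void main(String[] args) {
    System.out.println(
        resolveMetadataUrl("https://example.com/custom-oidc", "https://ws.databricks.com"));
    System.out.println(resolveMetadataUrl(null, "https://ws.databricks.com"));
  }
}
```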
Adds package-private `resolveHostMetadata()` to `DatabricksConfig`.
When called, it fetches `/.well-known/databricks-config` and back-fills
`accountId`, `workspaceId`, and `discoveryUrl` (with any `{account_id}`
placeholder substituted) if those fields are not already set.
Throws `DatabricksException` if `accountId` cannot be resolved or if no
`oidc_endpoint` is present in the metadata. Mirrors
`Config._resolve_host_metadata()` in the Python SDK.
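The back-fill rules in the commit message can be sketched as follows. The class, method, and field layout are illustrative stand-ins, not the SDK's actual structure; only the rules themselves come from the description above.

```java
// Hedged sketch of resolveHostMetadata()'s back-fill rules: only unset
// fields are populated, {account_id} is substituted, and unresolvable
// state raises. Names and static fields are illustrative only.
public class ResolveHostMetadataSketch {
  static String accountId;
  static String workspaceId;
  static String discoveryUrl;

  static void resolve(String mdAccountId, String mdWorkspaceId, String mdOidcEndpoint) {
    if (mdOidcEndpoint == null) {
      throw new RuntimeException("no oidc_endpoint present in host metadata");
    }
    if (accountId == null) {
      accountId = mdAccountId; // back-fill only when unset
    }
    if (mdOidcEndpoint.contains("{account_id}")) {
      if (accountId == null) {
        throw new RuntimeException("accountId cannot be resolved");
      }
      mdOidcEndpoint = mdOidcEndpoint.replace("{account_id}", accountId);
    }
    if (workspaceId == null) {
      workspaceId = mdWorkspaceId;
    }
    if (discoveryUrl == null) {
      discoveryUrl = mdOidcEndpoint;
    }
  }

  public static void main(String[] args) {
    // Account-host shape: templated endpoint, account ID from metadata.
    resolve("abc-123", null, "https://accounts.example.com/oidc/accounts/{account_id}");
    System.out.println(accountId);
    System.out.println(discoveryUrl);
  }
}
```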
## 🥞 Stacked PR

Use this link to review incremental changes.

## Summary

Wires host metadata discovery into `DatabricksConfig` via a new package-private `resolveHostMetadata()` helper, mirroring `Config._resolve_host_metadata()` in the Python SDK.

## Why

Users configuring the SDK today have no way to let it auto-discover `accountId`, `workspaceId`, or the OIDC endpoint from the host — they must supply each field manually. The `discoveryUrl` config property already allows providing a direct OIDC discovery URL; `resolveHostMetadata()` completes the picture by deriving all three fields automatically from the host's `/.well-known/databricks-config` endpoint.

## What changed

### Interface changes

- **`DatabricksConfig.resolveHostMetadata()` (package-private)** — Calls `getHostMetadata()` on the configured host and back-fills `accountId`, `workspaceId`, and `discoveryUrl` (with any `{account_id}` placeholder substituted) if those fields are not already set. Throws `DatabricksException` if `accountId` cannot be resolved or if no `oidc_endpoint` is present in the metadata.

### Behavioral changes

`resolveHostMetadata()` is not called automatically — it is opt-in.

### Internal changes

Tests in `DatabricksConfigTest`:

- `testResolveHostMetadataWorkspacePopulatesAllFields` — workspace populates all three fields.
- `testResolveHostMetadataAccountSubstitutesAccountId` — account host substitutes `{account_id}` in the OIDC endpoint.
- `testResolveHostMetadataDoesNotOverwriteExistingFields` — existing fields are not overwritten.
- `testResolveHostMetadataRaisesWhenAccountIdUnresolvable` — raises when `accountId` cannot be resolved.
- `testResolveHostMetadataRaisesWhenOidcEndpointMissing` — raises when `oidc_endpoint` is absent.
- `testResolveHostMetadataRaisesOnHttpError` — raises when the well-known endpoint returns an error.

## How is this tested?

Unit tests in `DatabricksConfigTest` using `FixtureServer`.

NO_CHANGELOG=true
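The fixture-server style of test described above can be mimicked with only the JDK. This sketch stands in for `FixtureServer` (whose actual API is not shown in this PR page): it serves a canned discovery document on `/.well-known/databricks-config` and fetches it back, as the unit tests do against the real config code.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

// JDK-only stand-in for a FixtureServer-style test of the well-known
// discovery endpoint. The JSON body and port are illustrative.
public class WellKnownFixtureSketch {
  public static void main(String[] args) throws Exception {
    byte[] body =
        "{\"oidc_endpoint\":\"https://h/oidc\",\"workspace_id\":\"42\"}"
            .getBytes(StandardCharsets.UTF_8);

    // Bind to an ephemeral port and serve the canned discovery document.
    HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
    server.createContext("/.well-known/databricks-config", ex -> {
      ex.sendResponseHeaders(200, body.length);
      try (OutputStream os = ex.getResponseBody()) {
        os.write(body);
      }
    });
    server.start();

    String host = "http://localhost:" + server.getAddress().getPort();
    HttpResponse<String> resp = HttpClient.newHttpClient().send(
        HttpRequest.newBuilder(URI.create(host + "/.well-known/databricks-config")).build(),
        HttpResponse.BodyHandlers.ofString());

    System.out.println(resp.statusCode());
    System.out.println(resp.body().contains("oidc_endpoint"));
    server.stop(0);
  }
}
```

Pointing the server at an error status instead of 200 would exercise the `testResolveHostMetadataRaisesOnHttpError` path in the same way.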