These prebuilt wheel files install our Python packages as of the specific commit described below.
Built at 2026-04-26T20:52:34.388525+00:00.
```json
{
  "timestamp": "2026-04-26T20:52:34.388525+00:00",
  "branch": "worktree-feat+local-embedding-provider",
  "commit": {
    "hash": "1afc93fe230fadc3888a0d3f33a4a11663e1e7ed",
    "message": "fix(search): address code review feedback on local embedding provider\n\n- Split LocalEmbeddingProvider timeout into CONNECT_TIMEOUT=10s and\n REQUEST_TIMEOUT=120s; cold GGUF model loading can take 60s+ so a longer\n read timeout is needed while the connect timeout stays short\n- Fix ConnectException detection: HttpClient sometimes wraps it inside\n IOException; add getCause() check so the helpful \"ollama serve\" hint\n fires in both cases; extract to newConnectError() helper\n- Add testWrappedConnectException and testIoExceptionRetryExhausted tests\n (15 total, was 12)\n- Fix _validate_provider_config: add 'local' branch so test_connection\n correctly reports capability instead of always returning False\n- Fix SemanticContent model_key: use model_embedding_key from server config\n when available (authoritative), fall back to derivation only when not set\n- Extract _LOCAL_EMBEDDING_DEFAULT_ENDPOINT constant in chunking_config.py\n to keep Java and Python defaults in sync\n- Ollama-model-init: add warmup embedding request after model pull so GGUF\n is loaded into memory before the container exits; add restart: \"no\"\n- application.yaml: make nomic_embed_text vectorDimension configurable via\n LOCAL_EMBEDDING_VECTOR_DIMENSION env var\n- datahub_dev.py: add --no-ai flag to clear AI env vars; add\n --embeddings-endpoint (BYO server, skips Ollama container) and\n --embeddings-model flags; add _wait_for_ollama_model_ready() probe so\n 'start --ai' blocks until model is loaded and the first search query\n is warm; future AI capabilities (chat etc.) can add --chat-endpoint\n\nCo-Authored-By: Claude Sonnet 4.6 "
  },
  "base": {
    "hash": "34c878a4484f895ba65e87a35ff4c4760252f6f2",
    "message": "ci(security): Trivy/Grype scan workflow, Linear sync, and registry profiles (#17159)"
  },
  "pr": {
    "number": 17201,
    "title": "feat(search): add local embedding provider for on-premise semantic search (Ollama)",
    "url": "https://github.com/datahub-project/datahub/pull/17201"
  }
}
```
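For automation (e.g. tagging a deployment with the commit it was built from), the metadata above can be read programmatically. A minimal sketch, assuming the JSON block has been captured into a string or file; the inlined subset below is copied from the metadata above, and only the fields shown are used:

```python
import json

# Subset of the build metadata above, inlined for a self-contained sketch.
# In practice this would be loaded from the generated metadata file.
raw = """{
  "timestamp": "2026-04-26T20:52:34.388525+00:00",
  "branch": "worktree-feat+local-embedding-provider",
  "commit": {"hash": "1afc93fe230fadc3888a0d3f33a4a11663e1e7ed"},
  "pr": {"number": 17201}
}"""

metadata = json.loads(raw)

# Short commit hash is convenient for image tags or log lines.
short_hash = metadata["commit"]["hash"][:12]
print(short_hash)
print(metadata["pr"]["number"])
```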
Current base URL: unknown — substitute your deployment's artifact host for `<base-url>` in the commands below.
| Package | Size | Install command |
|---|---|---|
| acryl-datahub | 3.590 MB | `uv pip install 'acryl-datahub @ <base-url>/artifacts/wheels/acryl_datahub-0.0.0.dev1-py3-none-any.whl'` |
| acryl-datahub-actions | 0.105 MB | `uv pip install 'acryl-datahub-actions @ <base-url>/artifacts/wheels/acryl_datahub_actions-0.0.0.dev1-py3-none-any.whl'` |
| acryl-datahub-airflow-plugin | 0.108 MB | `uv pip install 'acryl-datahub-airflow-plugin @ <base-url>/artifacts/wheels/acryl_datahub_airflow_plugin-0.0.0.dev1-py3-none-any.whl'` |
| acryl-datahub-dagster-plugin | 0.020 MB | `uv pip install 'acryl-datahub-dagster-plugin @ <base-url>/artifacts/wheels/acryl_datahub_dagster_plugin-0.0.0.dev1-py3-none-any.whl'` |
| acryl-datahub-gx-plugin | 0.011 MB | `uv pip install 'acryl-datahub-gx-plugin @ <base-url>/artifacts/wheels/acryl_datahub_gx_plugin-0.0.0.dev1-py3-none-any.whl'` |
| prefect-datahub | 0.011 MB | `uv pip install 'prefect-datahub @ <base-url>/artifacts/wheels/prefect_datahub-0.0.0.dev1-py3-none-any.whl'` |
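A minimal sketch of building one of the install commands above with the placeholder filled in. The `BASE_URL` value is an assumption for illustration only; replace it with your deployment's actual artifact host:

```shell
# Assumption: "https://example.com" stands in for your real artifact host.
BASE_URL="https://example.com"
WHEEL="acryl_datahub-0.0.0.dev1-py3-none-any.whl"

# Print the fully-resolved install command (run it once the URL is correct).
echo "uv pip install 'acryl-datahub @ ${BASE_URL}/artifacts/wheels/${WHEEL}'"
```

The `name @ URL` form is a PEP 508 direct reference, which pins the install to this exact wheel rather than resolving the package from PyPI.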