Secrets Management
Every real application needs secrets—database passwords, API keys, OAuth tokens, and service credentials. Hardcoding these values into your code is dangerous and inflexible. NoBS Python solves this by letting you declare what secrets your application needs using type-safe Python code, then managing those secrets securely across all your environments.
How Secrets Work
Secrets in NoBS Python are defined using a Pydantic BaseSettings class (imported from the pydantic-settings package). Instead of scattering configuration across environment variables and ad hoc files, you write a Python class that describes exactly what your application expects:
from pydantic import SecretStr
from pydantic_settings import BaseSettings

class SharedSecrets(BaseSettings):
    openai_api_key: SecretStr
    stripe_secret_key: SecretStr
This isn't just documentation—it's executable configuration. When NoBS Python builds your project, it reads this class definition and knows your application requires two secret values. It creates secure storage for them in each environment (test, production, preview branches) and prompts you to fill in the actual values through the dashboard.
At runtime, these secrets are injected into your application as environment variables. Your code reads them using Pydantic's settings mechanism, which automatically validates types and fails fast if something is missing. The secrets themselves never touch your source code or version control.
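For instance, instantiating the class at application startup is all it takes to load and validate the injected values. This is a minimal sketch; get_secret_value() is the standard Pydantic accessor for SecretStr:

# Reads OPENAI_API_KEY and STRIPE_SECRET_KEY from the environment;
# raises a ValidationError at startup if either is missing.
secrets = SharedSecrets()

# SecretStr keeps the value masked in logs and reprs; unwrap it only
# where the raw string is actually needed.
api_key = secrets.openai_api_key.get_secret_value()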
Because your configuration is Python instead of YAML or JSON, NoBS Python gains deep insight into your infrastructure needs. It can detect which database drivers you're using, which ML frameworks you've imported, and which cloud services you're connecting to. This enables intelligent provisioning and version matching that would be impossible with static config files.
Declaring Secrets in Your Project
To begin, create a Pydantic settings class that represents the required configuration for your service:
from pydantic import PostgresDsn, SecretStr
from pydantic_settings import BaseSettings

class SharedSecrets(BaseSettings):
    openai_api_key: SecretStr
    psql_url: PostgresDsn
You then register this class in your project definition:
project = Project(
    name="my-project",
    shared_secrets=[ObjectStorageConfig, SharedSecrets],
    server=FastAPIApp(app),
)
With that in place, no manual secret handling is needed. The platform will detect the settings and guide you through assigning values for each environment.
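For local development, the same class can also fall back to a .env file using standard pydantic-settings configuration. This is a sketch of that pattern, not platform-specific behavior; in deployed environments the platform injects real environment variables, which take precedence:

from pydantic_settings import SettingsConfigDict

class LocalSecrets(SharedSecrets):
    # Read missing values from a local .env file during development.
    model_config = SettingsConfigDict(env_file=".env")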
Intelligent Version Matching
Because configuration and imports are written in Python, the platform can determine not only which resources are needed, but also which versions to deploy. For example, if the project depends on a specific version of mlflow, the platform can automatically provision an MLflow server that matches that version, avoiding compatibility issues between client and server.
The same applies to systems such as Spark, S3/MinIO object storage SDKs, and any service where the runtime infrastructure must align with the Python package version. If your project imports a Spark dependency or references a particular version in pyproject.toml, the deployed Spark cluster will be version-compatible. This drastically reduces configuration divergence and runtime errors.
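As an illustration, your client code stays unchanged. The sketch below assumes the platform exposes the provisioned server through the standard MLFLOW_TRACKING_URI environment variable, which the mlflow client reads automatically:

import mlflow

# No tracking URI is hard-coded here: mlflow picks up
# MLFLOW_TRACKING_URI from the environment, which (under the
# assumption above) points at a server whose version matches the
# installed client.
with mlflow.start_run():
    mlflow.log_metric("accuracy", 0.93)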
Resource Discovery
The platform inspects the project and looks for configuration types that are known to represent infrastructure resources. For example, if a settings class includes types such as RedisDsn, PostgresDsn, MySQLDsn, NatsDsn, ClickHouseDsn, or MongoDsn, the system recognizes that these services may be required. It will then ask whether it should provision those services for you and automatically generate the corresponding connection credentials.
This means you do not need to pre-configure databases, message brokers, or analytics storage. The intent is captured by your Pydantic models, and the platform handles the resource creation and secret generation that follow from that intent.
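As a sketch, a cache-backed service only has to declare the connection type. The hypothetical CacheSecrets class below is enough for the platform to offer a Redis instance and generate a redis_url value for each environment:

from pydantic import RedisDsn
from pydantic_settings import BaseSettings

class CacheSecrets(BaseSettings):
    # Declaring the DSN type is the provisioning signal; the platform
    # fills in the generated connection URL per environment.
    redis_url: RedisDsn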
ML and AI Discovery
Similar intelligence applies to applications that rely on machine learning, AI, and large-scale data tooling. If your repository contains references to MLflow, Spark, S3 or other object storage tooling, or API-driven platforms such as OpenAI or Anthropic, the platform may suggest provisioning compatible compute environments and creating any needed access tokens.
For example, if an import or configuration parameter implies the use of the OpenAI API, you may be prompted to allow the platform to generate and store an API key for all deployment environments. If Spark jobs are detected, it may propose setting up a cluster with matching versions automatically. Object storage needs can result in new buckets, access policies, and secret assignments without any additional coding.
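Here is a sketch using the current OpenAI Python SDK; the client construction is an assumption about your application code, not a platform API:

from openai import OpenAI

# The key was declared in SharedSecrets and injected by the platform;
# unwrap the SecretStr only at the point of use.
client = OpenAI(api_key=SharedSecrets().openai_api_key.get_secret_value())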
Secrets Across Environments
Each deployment environment receives its own credentials. Development and preview environments can use temporary or low-privilege credentials, while production receives fully secured values. When changes move through the release pipeline, you may keep, regenerate, or update the secret values depending on the required security posture.
Summary
The secrets system is centered on Pydantic settings models, which serve as the single source of truth for required configuration. From these models, the platform:
- secures values automatically,
- maintains isolation between environments,
- discovers required infrastructure and offers to provision it, and
- creates or manages tokens for AI, ML, object storage, and similar external services.
All of these capabilities allow you to write Python configuration classes while the platform handles the difficult work of managing secrets at scale.