AI Product Terms

Model-specific terms for prompts, outputs, and AI safety expectations.

Effective: May 1, 2026
Last updated: May 1, 2026
This legal draft contains required placeholders such as [legal-entity-name] and [support-contact-email]. Replace all [placeholder-type] values with approved legal terms before publication.

1. AI Features and Output

Decalyst uses AI models to generate code, text, plans, and related outputs.

Output is probabilistic and may be incorrect, insecure, or incomplete; human review is required before use in production.

2. Model Routing and Providers

Requests may be processed by third-party providers, selected according to your Decalyst settings, plan features, or your own bring-your-own-key (BYOK) configuration.

Model providers may receive prompts, context files, outputs, and metadata needed to perform inference.

3. Improvement and Training Controls

Whether content is used in Decalyst product-improvement workflows depends on your account settings and plan controls.

If improvement settings are disabled, Decalyst does not use eligible content for product-improvement training, except as necessary for security, abuse prevention, or compliance with legal obligations.

4. High-Risk and Restricted Uses

  • No fully automated legal, medical, or safety-critical decisions without qualified human oversight.
  • No use for weapons development, unauthorized surveillance, or unlawful profiling.
  • No use that violates applicable AI, privacy, consumer protection, or sector-specific law.

5. Your Responsibilities

  • Review and test all AI-generated output before deployment.
  • Respect third-party rights and licensing requirements.
  • Provide legally required disclosures to your end users when your app uses AI output.

6. Safety and Enforcement

We may block or suspend model access for unsafe or policy-violating behavior, and may retain relevant logs for trust-and-safety review.