BotPM.ai is an AI-assisted enterprise project-management system. This page summarizes how the AI features are intended to be used, their limitations, and the safeguards we apply. It complements our AI Disclaimer and is intended to support disclosure obligations under the EU AI Act and similar frameworks.
1. System purpose
BotPM.ai assists project management offices (PMOs) with charter authoring, scheduling, earned-value analysis, RAID logs, governance workflows, status communications, and project closeout. It is not designed for safety-critical decisions, biometric identification, employment screening, credit scoring, law enforcement, or any other use case classed as prohibited under Article 5 or high-risk under Annex III of the EU AI Act.
2. Intended users
Project managers, program managers, PMO leaders, and approvers operating within an organization that has accepted the BotPM.ai Terms of Service. End users must be at least 18 years old and must have the authority, under their organization's policies, to use AI-assisted drafting and execution.
3. How the system works
BotPM.ai uses third-party large language models (currently OpenAI and Anthropic — see our Sub-processors) together with our own retrieval, prompting, and policy layers. Outputs are produced by combining your project content, your organization's configuration, and the selected autonomy mode (Assist, Operate, or Lead).
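As an illustrative sketch only, the combination described above (project content + organization configuration + autonomy mode) could be modelled like this; all class and function names here are hypothetical and are not BotPM.ai's actual internal API:

```python
from dataclasses import dataclass
from enum import Enum

class AutonomyMode(Enum):
    ASSIST = "assist"    # recommendations only
    OPERATE = "operate"  # routine, low-risk execution
    LEAD = "lead"        # strategic proposals, gated approvals

@dataclass
class DraftRequest:
    project_content: str   # e.g. charter text, RAID entries
    org_config: dict       # organization-level policy settings
    mode: AutonomyMode

def build_prompt(req: DraftRequest) -> str:
    """Combine project content, org configuration, and the selected
    autonomy mode into a single prompt for the upstream model call."""
    tone = req.org_config.get("tone", "formal")
    policy_note = f"Autonomy mode: {req.mode.value}. Tone: {tone}."
    return f"{policy_note}\n\nProject context:\n{req.project_content}"
```

The point of the sketch is the separation of concerns: the customer's content, the org's configuration, and the autonomy mode are distinct inputs, so policy constraints travel with every model call rather than being baked into the model itself.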
4. Training-data summary
We do not train foundation models on customer content. The third-party models we call were trained by their providers on broad public and licensed data. Our own retrieval and policy layers are tuned on synthetic project-management examples and on de-identified internal data with explicit consent.
Customer Personal Data and confidential project content are not used to train any model. Our agreements with model providers include "no training on inputs" commitments where available.
5. Known limitations
- AI outputs may be incomplete, outdated, biased, or factually wrong ("hallucinations").
- Numeric calculations (e.g., EVM metrics, schedule math) are computed by deterministic engines wherever possible, but AI-generated narrative around them must still be reviewed.
- Quality depends on the completeness and accuracy of the project content you provide.
- The system reflects the language, conventions, and biases present in its underlying training data.
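To illustrate why the EVM metrics mentioned above can be computed deterministically: the standard earned-value formulas (CPI = EV/AC, SPI = EV/PV) are pure arithmetic and require no model call. A minimal version, using the conventional definitions rather than any BotPM.ai-specific code:

```python
def evm_metrics(ev: float, ac: float, pv: float) -> dict:
    """Standard earned-value management metrics.
    ev: earned value, ac: actual cost, pv: planned value."""
    return {
        "cpi": ev / ac,   # cost performance index (>1 means under budget)
        "spi": ev / pv,   # schedule performance index (>1 means ahead of schedule)
        "cv": ev - ac,    # cost variance
        "sv": ev - pv,    # schedule variance
    }
```

For example, evm_metrics(ev=100.0, ac=80.0, pv=125.0) yields a CPI of 1.25 (under budget) and an SPI of 0.8 (behind schedule). Only the narrative interpretation of such numbers involves AI, which is why that narrative still needs review.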
6. Human oversight
- Assist mode provides recommendations only; a human performs every action.
- Operate mode executes routine, low-risk tasks within explicit org policy.
- Lead mode can propose strategic actions, but high-impact or policy-gated changes still require an identified human approver.
- Every AI action is logged for review, and approvers can revoke or revert outcomes.
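The gating rule implied by the modes above can be summarized in a few lines. This is a hypothetical simplification for illustration, not the product's actual policy engine, and the mode and risk labels are assumptions:

```python
def requires_human_approval(mode: str, risk: str) -> bool:
    """Simplified oversight rule: Assist never acts autonomously;
    Operate may execute only routine low-risk tasks; Lead still
    routes high-impact or policy-gated changes to an approver."""
    if mode == "assist":
        return True
    if mode == "operate":
        return risk != "low"
    if mode == "lead":
        return risk in ("high", "policy_gated")
    raise ValueError(f"unknown mode: {mode}")
```

Note that approval requirements only tighten as risk rises: no mode removes the human approver from high-impact changes.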
7. AI content marking
Outputs that are substantially generated by AI are clearly labeled inside the product (for example, with an "AI draft" tag). Where outputs are sent to third parties (status reports, stakeholder emails), the originating customer chooses whether to include the label.
8. Contact for AI-related concerns
To report a harm, bias, or accuracy concern relating to BotPM.ai's AI features, email hello@botpm.ai. We acknowledge AI safety reports within 5 business days.