The ManageBac AI features allow schools to manage end user access to large language models (LLMs).
Data Protection
ManageBac and Faria Education Group have prioritized responsible use and safety in the introduction of our Generative Artificial Intelligence feature. To that end, we are pleased to announce the following:
- ManageBac does not share personally identifiable information (PII) with third-party LLMs
- Third-party LLMs can only access data that has been explicitly selected or entered by end users
- ManageBac does not review user prompts and the corresponding AI responses to train LLMs
- ManageBac's third-party licensing agreement(s) preclude user prompts from being used to train LLMs
Data Processing
- ManageBac does not transfer personal or sensitive data across regions in the application or use of LLMs
- The data retention period for the service hosting the LLM model(s) is approximately 30 days
Safeguarding
All queries to LLM models are subject to filters that use AI reasoning capabilities to protect users from inflammatory or harmful content.
For example, if a user enters a prompt that would produce content in violation of the above, ManageBac AI will output a message indicating it cannot fulfill the request:
Only the end user who submitted the prompt receives the above message; no additional alerts or notifications are sent to other end users or to ManageBac support personnel.
Please note that because the safeguarding filters are themselves underpinned by an LLM, the filter's performance is subject to the variance inherent in the technology.
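The filtering flow described above can be sketched as a pre-flight check that runs before the main model is called. This is an illustrative sketch only: the classifier, refusal message, and keyword heuristic are hypothetical stand-ins, not ManageBac's actual implementation (which uses a model-based filter and may therefore vary between runs).

```python
# Hypothetical sketch of a pre-flight safeguarding filter.
# The classifier below is a deterministic stand-in; a real filter
# would call an LLM-based moderation model at this step.

REFUSAL = "ManageBac AI cannot fulfill this request."  # hypothetical wording

def classify_prompt(prompt: str) -> bool:
    """Return True when the prompt would produce disallowed content.
    Stand-in heuristic; the real check is model-based."""
    blocked_terms = ("inflammatory", "harmful")
    return any(term in prompt.lower() for term in blocked_terms)

def generate_response(prompt: str) -> str:
    """Placeholder for the call to the main LLM."""
    return f"(model response to: {prompt})"

def answer(prompt: str) -> str:
    if classify_prompt(prompt):
        return REFUSAL  # only the submitting user sees this message
    return generate_response(prompt)
```

Because the real filter is itself an LLM, the boolean decision shown here would in practice be probabilistic rather than an exact keyword match.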
Prompt Engineering
All AI-enabled features are augmented with additional prompt content, maintained by specialist teams, in order to produce quality output. ManageBac may adjust this prompt engineering, and other associated parameters such as "temperature", at any time in order to further improve the output.
The only information passed to LLM models that influences their output is the following:
- User prompts that are entered by directly typing or clicking
- System prompts that are pre-determined by prompt engineers
- Parameter values, such as temperature
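The three input types listed above can be pictured as the parts of a single model request. The sketch below uses the common chat-completion request shape (a system message, a user message, and a sampling temperature) purely for illustration; the actual service, field names, system prompt, and parameter values ManageBac uses are not public and are assumptions here.

```python
# Illustrative sketch: combining the only three inputs that reach the
# model. All concrete values below are hypothetical examples.

def build_request(user_prompt: str,
                  system_prompt: str = "You are a helpful teaching assistant.",
                  temperature: float = 0.7) -> dict:
    """Assemble a chat-completion style payload from:
    - the user's typed or clicked prompt,
    - a system prompt pre-determined by prompt engineers,
    - parameter values such as temperature."""
    return {
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        # Parameters like this may be adjusted at any time to improve output.
        "temperature": temperature,
    }

request = build_request("Draft three unit questions on photosynthesis.")
```

No other user or school data enters the payload unless the end user explicitly selects or enters it.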
Logs
All queries submitted by end users to LLM models, and the responses they receive, are logged and available only to trained specialists. Logs are reviewed solely to improve prompt engineering and the user experience.