As you adopt AI in your instructional and research activities, including AI testing, proofs of concept, and similar endeavors, we strongly recommend following these guiding principles to ensure a secure and responsible AI experience.
Protect Sensitive Data
Avoid entering confidential, personal, or institutional information into AI tools; these tools lack privacy safeguards and may expose such data to unauthorized parties. Treat anything you enter into ChatGPT or other AI prompts as if it were posted on a public site, such as a social network or blog. Do not use ChatGPT or similar generative AI models to generate or process any sensitive or confidential information classified as Level 2 or above under the University's Data Classification Policy. This includes personal identification details, passwords, financial data, and sensitive research data.
Verify AI-Generated Content
Always review and verify AI-generated content for accuracy and avoid relying on it as an authoritative source. Remember that you bear responsibility for the information you disseminate.
Connect With Us Before Purchasing Any Generative AI Tools
ATC & Bentley IT are actively working to identify and implement secure environments for using generative AI tools. If you have purchased, or are considering purchasing, any AI tools, please contact the ATC (atc@bentley.edu) or the helpdesk (helpdesk@bentley.edu).
Guard Against Cyber Threats
Stay vigilant against cyber threats, as AI has made it easier for cybercriminals to craft realistic phishing messages. If you encounter any suspicious activity, please report it promptly to our dedicated email address: phishbowl@bentley.edu.