Booth A.I./Machine Learning Operations Security Policy

Tags itsec AI

AI Security Control Statement

Booth’s AI security controls are designed to mitigate risks, protect sensitive data, and ensure the integrity of AI systems. These controls encompass a comprehensive framework that includes encryption, access controls, anomaly detection, model explainability, secure development practices, and continuous monitoring. By implementing these controls, Booth aims to address potential vulnerabilities, prevent unauthorized access, enhance transparency, maintain control over data in compliance with contracts and data use agreements, and maintain compliance with relevant regulations and standards. Through regular assessments and updates, Booth strives to adapt our security controls to evolving threats and technologies, fostering a secure environment for the responsible deployment and utilization of AI across Booth.

Purpose

At Chicago Booth, we prioritize the security and integrity of our AI systems to safeguard sensitive data, maintain trust with our stakeholders, and uphold ethical standards. Our AI security policy outlines our commitment to the robustness, reliability, and confidentiality of our AI technologies through stringent measures encompassing data protection, risk assessment, transparent model practices, secure development protocols, access control, incident response, compliance, and continuous improvement. Through proactive measures and a commitment to ethical standards, we foster a secure environment for innovation and collaboration.

Data Protection and Privacy

We adhere to strict data protection regulations and guidelines to safeguard the privacy of individuals and the confidentiality of data. We implement encryption, access controls, and anonymization techniques to protect sensitive information processed by our AI systems.

All AI and machine learning tools must abide by data use agreements, including data control and data destruction requirements. Some data used in AI model training may need to be anonymized or masked.
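As an illustrative sketch only (not a Booth-prescribed implementation), masking might mean one-way hashing identifier fields before records are used for model training. The field names, salt, and sample record below are hypothetical:

```python
import hashlib

def mask_identifier(value: str, salt: str = "example-salt") -> str:
    """One-way hash an identifier so the raw value never reaches training.
    The salt here is a placeholder; a real deployment would keep it secret."""
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()[:16]

def anonymize_records(records, pii_fields=("name", "email")):
    """Return copies of training records with PII fields replaced by hashes."""
    masked = []
    for rec in records:
        rec = dict(rec)  # copy so the original record is left untouched
        for field in pii_fields:
            if field in rec:
                rec[field] = mask_identifier(rec[field])
        masked.append(rec)
    return masked

rows = [{"name": "A. Student", "email": "a@example.edu", "score": 91}]
print(anonymize_records(rows)[0])  # non-PII fields pass through unchanged
```

Hashing is deterministic, so masked records can still be joined on the same identifier without exposing the underlying value; data covered by stricter agreements may require full anonymization instead.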

Public AI and machine learning tools (e.g., ChatGPT, Microsoft Copilot) are not approved for use with data covered by restricted data use agreements. Only data with unrestricted use, public data, or self-generated data may be used in public-facing AI tools.

Risk Assessment and Management

Booth IT Security conducts regular risk assessments to identify potential vulnerabilities and threats to our AI infrastructure. Security teams employ proactive measures to mitigate risks, including security audits, penetration testing, and continuous monitoring of system activity.

All AI systems, internal and external, must maintain risk management standards in compliance, at a minimum, with Booth’s Risk Management Policy.

External AI vendors must comply with the Booth Vendor and Third Party Management Policy.

Model Transparency and Explainability

Booth promotes transparency and explainability in all AI models to ensure accountability and trustworthiness. AI developers must provide clear documentation and insights into the functioning of our algorithms, enabling stakeholders to understand model decisions and potential biases.

Secure Development Practices

Booth must follow secure development practices throughout the AI lifecycle, from data collection and model training to deployment and maintenance. Developers must undergo training in, and be familiar with, secure coding practices, and must implement code reviews and version control to maintain the integrity of AI applications.

Access Control and Authentication

Strict access controls and authentication mechanisms to prevent unauthorized access to AI systems and data are required on all Booth systems. Access permissions are granted on a need-to-know basis, and multi-factor authentication is employed to verify the identity of users. Booth follows the principle of least privilege.

Vendor-controlled external AI systems must have adequate access controls. User accounts must be maintained and must meet, at a minimum, Booth-accepted standards.

Access controls for all AI systems must be in compliance with Booth’s Access Control Policy.
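A minimal sketch of what need-to-know, least-privilege enforcement looks like in practice: access is denied by default, and only explicitly granted actions succeed. The role and permission names below are hypothetical, not Booth's actual scheme:

```python
# Deny-by-default permission check: roles map to the explicit set of
# actions they may perform; anything not listed is refused.
ROLE_PERMISSIONS = {
    "researcher": {"training_data:read"},
    "ml_engineer": {"training_data:read", "model:deploy"},
}

def is_allowed(role: str, action: str) -> bool:
    """Least privilege: permit an action only if it was explicitly granted."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("researcher", "training_data:read"))  # True: granted
print(is_allowed("researcher", "model:deploy"))        # False: never granted
print(is_allowed("guest", "training_data:read"))       # False: unknown role
```

The design choice worth noting is the default: an unknown role or unlisted action falls through to a refusal, so new capabilities must be granted deliberately rather than discovered by accident.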

Incident Response and Recovery

AI systems must align with established protocols for incident response and recovery to swiftly address security breaches or system failures. All team members must be trained to respond effectively to security incidents, including timely communication with stakeholders and implementing measures to mitigate the impact.

Compliance and Governance

AI systems must adhere to University-accepted regulatory requirements and industry standards governing AI security, including GDPR, HIPAA, and the NIST Cybersecurity Framework (CSF). Our governance framework ensures oversight and accountability in the implementation of security measures across the organization.

Continuous Improvement

Continuous improvement should be built into planning for all AI security practices through regular assessments, feedback mechanisms, and collaboration with security experts and industry peers. Emerging threats and technologies should be actively monitored so that security measures can be adapted accordingly.

Security Awareness Training

Some AI platform users may be required to complete additional security awareness training to address specific risks. These training requirements will be established by Information Security and Data Governance in accordance with regulations, contractual obligations, University and Booth risk tolerance, and industry best practices.

Conclusion

At Booth, we recognize the importance of AI security in fostering trust, protecting privacy, and mitigating risks. By adhering to this policy and fostering a culture of security awareness, we strive to maintain the highest standards of security in both our internal AI ecosystem and our use of public-facing offerings.

Details

Article ID: 14993
Created
Tue 9/3/24 3:50 PM
Modified
Tue 9/3/24 3:58 PM