Booth AI System Best Practices

Booth Information Security provides the following best practices to help faculty, staff, and researchers securely develop, deploy, and use AI systems. These practices are designed to reduce risk, ensure compliance, and protect Booth data and reputation.

Data Security

• Data Encryption: Encrypt sensitive data both at rest (storage) and in transit (network communication) to prevent unauthorized access; a minimal at-rest sketch appears after this list.

• Access Control: Apply role-based access controls to ensure only authorized personnel can access sensitive datasets and systems.

• Anonymization and Masking: Use anonymization or masking when working with sensitive datasets (e.g., HIPAA, FERPA, PII, financial data); a masking sketch also appears after this list. For details on masking sensitive data fields, contact Booth Information Security.
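
As a minimal illustration of at-rest encryption, the sketch below uses the third-party cryptography package's Fernet recipe (symmetric, authenticated encryption). The file names and inline key generation are assumptions to keep the example self-contained; production keys belong in a managed secret store.

```python
# At-rest encryption sketch using the `cryptography` package's Fernet
# recipe (pip install cryptography). File names are illustrative.
from cryptography.fernet import Fernet

# In practice, load the key from a managed secret store; generating it
# inline here only keeps the sketch self-contained.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt a sensitive dataset before writing it to storage.
with open("dataset.csv", "rb") as f:
    ciphertext = fernet.encrypt(f.read())
with open("dataset.csv.enc", "wb") as f:
    f.write(ciphertext)

# Decrypt only when an authorized process needs the data back.
with open("dataset.csv.enc", "rb") as f:
    plaintext = fernet.decrypt(f.read())
```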
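
And as one possible approach to masking, this sketch replaces direct identifiers with salted hashes so records stay linkable without exposing raw values. The field names and salt handling are hypothetical; clear any masking scheme for regulated data with Booth Information Security first.

```python
# Masking sketch: swap direct identifiers for salted-hash pseudonyms.
# Field names are hypothetical; adapt them to the dataset at hand.
import hashlib

SALT = b"replace-with-a-secret-salt"         # keep out of source control
ID_FIELDS = {"name", "email", "student_id"}  # assumed identifier fields

def mask_record(record: dict) -> dict:
    masked = {}
    for field, value in record.items():
        if field in ID_FIELDS:
            digest = hashlib.sha256(SALT + str(value).encode()).hexdigest()
            masked[field] = digest[:12]      # truncated pseudonym
        else:
            masked[field] = value
    return masked

print(mask_record({"name": "Jane Doe", "email": "jane@example.edu", "gpa": 3.8}))
```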

Model Security

• Secure Training Environments: Ensure training environments are segmented and isolated from untrusted networks. Booth IT can help implement isolation measures as needed.

• Regular Audits: Conduct security audits of AI models and their supporting infrastructure. Booth Information Security can provide guidance and tools for auditing.

• Adversarial Robustness: Incorporate defenses against adversarial attacks through segmentation, regular maintenance, anomaly detection, and monitoring; a simple input anomaly check is sketched after this list.
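
A minimal sketch of the anomaly-detection piece, assuming numeric feature vectors: flag inputs whose features fall far outside the distribution seen during training. Real deployments would use purpose-built detectors, but the shape of the control is the same.

```python
# Input anomaly check sketch: flag requests whose features sit far
# outside the training distribution. Data and thresholds are illustrative.
import numpy as np

rng = np.random.default_rng(0)
training_features = rng.normal(loc=0.0, scale=1.0, size=(10_000, 4))

mean = training_features.mean(axis=0)
std = training_features.std(axis=0)

def is_anomalous(x: np.ndarray, z_threshold: float = 4.0) -> bool:
    """Reject inputs with any feature more than z_threshold std devs out."""
    z_scores = np.abs((x - mean) / std)
    return bool((z_scores > z_threshold).any())

print(is_anomalous(np.array([0.1, -0.3, 0.5, 0.2])))  # False: typical input
print(is_anomalous(np.array([0.1, -0.3, 9.0, 0.2])))  # True: far out of range
```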

Operational Security

• Monitoring and Logging: Continuously log and monitor AI system activities for anomalies or potential incidents; a structured-logging sketch appears after this list. Booth IT can provide supplemental monitoring services.

• Incident Response: All AI systems are subject to Booth’s Incident Response Policy and Plan. Ensure AI-specific threats and scenarios are integrated into response playbooks.

• Patch Management: Apply timely updates and patches to AI software, frameworks, and dependencies. Treat AI environments as critical systems.
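
As a starting point on the logging side, the sketch below writes one structured record per model call (caller, model, latency, outcome) using only the Python standard library. The field set is an assumption; align it with whatever Booth's monitoring tooling actually ingests.

```python
# Structured audit-logging sketch for AI system calls, stdlib only.
# Logged fields are illustrative; sizes are logged, not raw prompts.
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("ai_audit")

def log_model_call(user: str, model: str, prompt_chars: int, fn):
    """Run fn() and emit a structured log record whether it succeeds or fails."""
    start = time.monotonic()
    status = "ok"
    try:
        return fn()
    except Exception:
        status = "error"
        raise
    finally:
        log.info(json.dumps({
            "event": "model_call",
            "user": user,
            "model": model,
            "prompt_chars": prompt_chars,
            "latency_ms": round((time.monotonic() - start) * 1000, 1),
            "status": status,
        }))

log_model_call("jdoe", "demo-model", 42, lambda: "response")
```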

Ethical and Compliance Considerations

• Bias and Fairness: Assess and mitigate bias in AI models to ensure ethical and equitable outcomes.

• Compliance: Ensure compliance with applicable laws and Booth policies (e.g., GDPR, FERPA, HIPAA, GLBA). All research data use must be reviewed and approved by URA and/or Booth Data Governance.

• Transparency and Explainability: Favor models and practices that provide interpretable results to support accountability and trust.

User Awareness and Training

• Security Training: All personnel working with AI must complete security training. Some AI systems may require additional training before access is granted.

• Awareness Programs: Participate in Booth IT Security awareness programs on AI risks and secure practices. Information Security and Data Governance can provide additional instruction upon request.

Supply Chain Security

• Vendor Assessment: Use Booth’s 3rd Party Risk Management process to evaluate all AI vendors, platforms, and service providers.

• Secure Development Lifecycle (SDLC): Integrate security controls throughout the AI lifecycle, from design through deployment and maintenance. Information Security can provide resources, including the OWASP Top 10 for LLMs.

Secure Deployment

• Environment Hardening: Secure APIs, enable firewalls, and restrict unnecessary services. Some AI platforms and datasets may require enhanced protections, which could add access complexity.

• Secure CI/CD: Implement secure CI/CD pipelines to maintain integrity and security during AI system updates and deployments.

Collaboration and Sharing

• Information Sharing: Share knowledge about AI risks, threats, and mitigation practices with peers and trusted partners.

• Research and Development: Engage with Booth IT to support research initiatives that address evolving AI security threats.

Risk Management

• Risk Assessment: Conduct risk assessments prior to AI adoption, development, or deployment. Booth IT Security can support this process.

• Risk Identification: Review the 'Risks of Using AI Tools at Booth' section below before using AI/ML/LLM systems.

Resilience and Redundancy

• Backup and Recovery: Work with IT to implement secure, tested backup and recovery procedures for AI data and models.

• Redundancy: Design systems with redundancy to preserve functionality in the event of outages or attacks.


Risks of Using AI Tools at Booth

1. Adversarial Attacks

  • Risk: Attackers can trick or poison AI systems.
  • Remediation:
    • Only use approved, trusted data sources.
    • Do not upload unknown or suspicious data into AI tools.
    • Work with Booth IT to set up protected environments for AI training.

2. Data Privacy

  • Risk: AI systems can accidentally leak confidential information.
  • Remediation:
    • Never put sensitive Booth data (student records, financial info, health data) into public AI tools like ChatGPT unless approved; a simple pre-submission scan is sketched after this list.
    • Always check with Booth Data Governance before using sensitive datasets.
    • When in doubt, assume data you type into an AI system may be visible to others.
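
One cheap guardrail for text prompts, sketched below: scan for obvious identifier patterns before anything is sent to an external tool. Pattern matching like this catches only the easy cases and is no substitute for approval; the patterns shown are illustrative, not exhaustive.

```python
# Pre-submission scan sketch: refuse to send text with obvious PII patterns
# to an external AI tool. Patterns are illustrative, not exhaustive.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def contains_pii(text: str) -> list[str]:
    """Return the names of any PII patterns found in the text."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(text)]

prompt = "Summarize feedback from jane@example.edu, SSN 123-45-6789."
hits = contains_pii(prompt)
if hits:
    print(f"Blocked: prompt appears to contain {', '.join(hits)}.")
```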

3. Model Theft / Intellectual Property Risks

  • Risk: Attackers can copy or reconstruct AI models or data.
  • Remediation:
    • Restrict who can access Booth models and datasets.
    • Do not share models outside Booth without approval.
    • Use Booth IT Security’s guidance for securing APIs and limiting queries.

4. Bias & Fairness

  • Risk: AI can give unfair, biased, or discriminatory results.
  • Remediation:
    • Regularly test your AI models for fairness (different groups, demographics, etc.); a minimal parity check is sketched after this list.
    • Flag unexpected or unfair results to Booth IT Security or Data Governance.
    • Use diverse and approved datasets for training when possible.
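
A minimal sketch of one such test, assuming you have model predictions and a group label for each record: compare positive-prediction rates across groups and flag large gaps (a demographic-parity check). This is only one fairness notion; the right metric depends on the application.

```python
# Demographic-parity sketch: compare positive-prediction rates across groups.
# The data and the 0.1 gap threshold are illustrative only.
from collections import defaultdict

predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]              # model outputs
groups      = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

totals, positives = defaultdict(int), defaultdict(int)
for pred, group in zip(predictions, groups):
    totals[group] += 1
    positives[group] += pred

rates = {g: positives[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())
print(rates, f"gap={gap:.2f}")
if gap > 0.1:  # illustrative threshold
    print("Parity gap exceeds threshold; investigate and report it.")
```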

5. Infrastructure Vulnerabilities

  • Risk: AI software, APIs, or frameworks can be hacked.
  • Remediation:
    • Always patch and update AI software.
    • Secure APIs with authentication and rate limits; see the sketch after this list.
    • Work with Booth IT before exposing any AI service to the public internet.
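
A minimal sketch of those two controls, assuming a simple shared-key scheme: reject calls without a valid API key and cap each key's request rate with a fixed-window counter. A production service would use a real gateway or framework middleware, but the checks look like this.

```python
# API hardening sketch: API-key check plus a fixed-window rate limit.
# Keys, limits, and window size are illustrative.
import time
from collections import defaultdict

VALID_KEYS = {"demo-key-123"}   # issue and store real keys securely
MAX_REQUESTS = 10               # per key, per window
WINDOW_SECONDS = 60

_counts: dict = defaultdict(int)
_window_start = time.monotonic()

def check_request(api_key: str) -> bool:
    """Return True only for authenticated, within-limit requests."""
    global _window_start
    if api_key not in VALID_KEYS:
        return False            # unauthenticated: reject outright
    now = time.monotonic()
    if now - _window_start >= WINDOW_SECONDS:
        _counts.clear()         # start a new window
        _window_start = now
    _counts[api_key] += 1
    return _counts[api_key] <= MAX_REQUESTS

print(check_request("demo-key-123"))  # True until the limit is hit
print(check_request("wrong-key"))     # False: rejected
```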

6. Misuse & Abuse

  • Risk: AI can be misused for cyberattacks (phishing, deepfakes, malicious bots).
  • Remediation:
    • Only use AI for approved research, teaching, or business purposes.
    • Report suspicious or harmful AI use immediately to Booth IT Security.
    • Do not attempt to build or use AI tools for offensive or unethical purposes.

7. Regulatory & Compliance Risks

  • Risk: Breaking laws or policies (e.g., FERPA, HIPAA, GDPR) by mishandling data.
  • Remediation:
    • Always get approval from Booth Information Security and Data Governance before using regulated datasets.
    • Do not use personal or student data in AI tools without clearance.
    • When unsure, ask Booth IT Security and Data Governance before starting.

8. Operational Risks

  • Risk: AI systems can fail, behave unpredictably, or be relied on too heavily.
  • Remediation:
    • Always double-check AI outputs before relying on them.
    • Have backup (manual) methods in place if AI systems are unavailable.
    • Never use AI as the sole decision-maker for critical tasks; a simple review gate is sketched after this list.
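
One lightweight pattern for that last point, sketched below: act on a model's output only above a confidence threshold and route everything else to a human reviewer. The threshold is an assumption to tune per task.

```python
# Human-in-the-loop sketch: auto-apply only high-confidence outputs;
# everything else goes to manual review. Threshold is illustrative.
def handle_prediction(label: str, confidence: float, threshold: float = 0.9) -> str:
    if confidence >= threshold:
        return f"auto: {label}"
    return f"review: {label} (confidence {confidence:.2f} below {threshold})"

print(handle_prediction("approve", 0.97))  # acted on automatically
print(handle_prediction("approve", 0.62))  # routed to a person
```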

9. Supply Chain Risks

  • Risk: Third-party AI tools or pre-trained models may introduce vulnerabilities.
  • Remediation:
    • Use only approved vendors reviewed by Booth’s 3rd Party Risk Management process.
    • Avoid downloading or using unvetted AI models from the internet; if a third-party model is unavoidable, verify its integrity (see the checksum sketch after this list).
    • Check with IT before adopting a new AI tool or service.
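
When a third-party model file is unavoidable, one basic integrity check is to compare its SHA-256 digest against the hash the vendor publishes, as sketched below. The file path and expected digest are placeholders.

```python
# Integrity-check sketch: verify a downloaded model file against the
# vendor-published SHA-256 digest. Path and digest are placeholders.
import hashlib

EXPECTED_SHA256 = "replace-with-the-vendor-published-digest"

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

if sha256_of("model.bin") != EXPECTED_SHA256:
    raise RuntimeError("Model file failed integrity check; do not load it.")
```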

10. Transparency & Accountability

  • Risk: AI decisions may be a “black box” and difficult to explain.
  • Remediation:
    • Favor models and tools that allow explanation of decisions.
    • Document how your AI system was trained and tested; a minimal model-card sketch follows this list.
    • Be prepared to explain AI results to students, colleagues, or leadership.
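
A minimal sketch of that documentation habit: keep a small, versioned "model card" record alongside the model. The fields below are an assumption; extend them to whatever your reviewers need.

```python
# Model-card sketch: a small, versioned record of how a model was built.
# Fields and values are illustrative.
import json
from dataclasses import dataclass, asdict, field

@dataclass
class ModelCard:
    name: str
    version: str
    training_data: str        # approved dataset identifier
    evaluation: dict          # metrics, including fairness checks
    known_limitations: list = field(default_factory=list)

card = ModelCard(
    name="course-demand-forecast-demo",
    version="0.1.0",
    training_data="approved-dataset-2025-01",
    evaluation={"accuracy": 0.91, "parity_gap": 0.04},
    known_limitations=["not validated on pre-2020 enrollment data"],
)
print(json.dumps(asdict(card), indent=2))
```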

All AI users share responsibility for risk management, compliance, and timely reporting of incidents to Booth IT Security.

Details

Article ID: 19765
Created: Tue 9/9/25 6:22 PM
Modified: Tue 9/9/25 6:25 PM