Booth AI System Best Practices

Summary

Information security is critical in the development and deployment of AI systems. Here are some best practices for ensuring the security of AI systems

Body

Data Security

Data Encryption: Encrypt sensitive data both at rest (in storage) and in transit to prevent unauthorized access.

Access Control: Implement strict access controls to ensure that only authorized personnel can access sensitive data.

Anonymization and Masking: Use techniques like anonymization and masking* to protect personal data used in AI training.  This may be required with specific sensitive datasets (such as HIPPA, PII, FERPA, Financial Data, etc.).

*For details on sensitive data point masking reach out to Information Security.

Model Security

Secure Training Environments: Ensure that the environments used for training AI models are secure and isolated from untrusted networks.  If needed, Booth Information Technology can help isolate areas in accordance with security and privacy needs.

Regular Audits: Conduct regular security audits of the AI models and the infrastructure supporting them.  Information Security can assist in putting good auditing practices in place.

Adversarial Robustness: Implement defenses against adversarial attacks, such as network segmentation, maintenance regimens, monitoring, and anomaly detection.

Operational Security

Monitoring and Logging: Continuously monitor and log AI system activities to detect and respond to security incidents.  Information Technology can assist in additional monitoring to meet needs.

Incident Response Plan: Develop and maintain an incident response plan tailored to AI-specific threats and vulnerabilities.  All AI systems fall under Booth Information Technology’s Incident Response Policy and Plan.

Patch Management: Regularly update and patch AI software and dependencies to protect against known vulnerabilities.  Like any other computer system, patching is critical.

Ethical and Compliance Considerations

Bias and Fairness: Implement measures to detect and mitigate biases in AI models to ensure fair and ethical outcomes.

Compliance: Ensure AI systems comply with relevant laws and regulations, such as GDPR for data protection.  Make sure all use of all research data is approved by URA and/or Booth Data Governance.

Transparency and Explainability: Develop AI systems that can provide explanations for their decisions, enhancing transparency and trust.

User Awareness and Training

Security Training: Complete regular security training for all personnel involved in the development, deployment, and maintenance of AI systems.  Some AI access/use may require specific security training.

Awareness Programs: Participate in Booth’s awareness programs to educate our teams about the potential security risks associated with AI and how to mitigate them.  If you would like additional instruction, Information Security and Data Governance can assist.

Supply Chain Security

Vendor Assessment: Use the Booth 3rd Party Risk Management process to assess the security practices of third-party vendors and service providers involved in the AI supply chain.  Booth Information Security can assist you in performing a 3rd party risk assessment.

Secure Development Lifecycle (SDLC): Integrate security practices throughout the AI development lifecycle, from design to deployment and maintenance.  Booth Information Security can provide you with guidelines, including OWASP Top 10 for LLM’s.

Secure Deployment

Environment Hardening: Harden the deployment environment to reduce the attack surface, such as securing APIs and using firewalls.  Booth Information Technology may be required to increase security around some AI platforms and specific data.  This may mean access can be slightly more challenging and complex.  This is due to the increased risk of the tool and data.

Continuous Integration/Continuous Deployment (CI/CD): Implement secure CI/CD pipelines to automate and secure the deployment of AI models and updates.

Collaboration and Sharing

Information Sharing: Collaborate with industry peers and share information about emerging threats and best practices.

Research and Development: Investments are constantly in place by Booth Information Technology to stay ahead of evolving threats and to develop new security solutions for AI systems.  Help IT by engaging in these efforts.

Risk Management

Risk Assessment: Regularly perform risk assessments to identify and mitigate potential security risks associated with AI systems.

Risk Identification: Be sure to read and understand all identified risks involving the use of AI tools before working with any.  See the Risks of Using AI Tools at Booth document published by IT Security.

 

Resilience and Redundancy

Backup and Recovery: Work with Booth Information Technology to implement robust backup and recovery procedures to ensure data and model integrity in the event of a security breach.

Redundancy: Design AI systems with redundancy to maintain functionality and performance in the face of attacks or failures.

 

Implementing these best practices can significantly enhance the security of AI systems, protecting them from a wide range of threats and ensuring their reliability and trustworthiness.

 

 

Risks of Using AI Tools at Booth

Using AI tools introduces several information security risks that Booth must address to ensure the integrity, confidentiality, and availability of their systems and data. All Booth staff, faculty, or researchers should read and understand these risks before using any AI/ML/LLM tools. Here are some of the key information security risks associated with AI tools:

Adversarial Attacks

Evasion Attacks: Attackers can manipulate input data to cause AI models to make incorrect predictions or classifications without being detected.

Poisoning Attacks: Malicious actors can inject false data into the training set, compromising the model's performance and reliability.

Prompt Injection Attacks: GenAI users can issue prompts that reveal a chatbot’s training data or operating instructions.

Data Privacy and Confidentiality

Data Leakage: AI models trained on sensitive data might inadvertently reveal confidential information, either through model inversion attacks or by overfitting to the training data.  This is especially risky with the wide use of tools like ChatGPT.  All AI users must be aware of where they are putting Booth data and to make sure they are in alignment with Booth Data Governance requirements.

Insider Threats: Unauthorized personnel might gain access to sensitive training data or model outputs, leading to data breaches.

Model Theft and Intellectual Property Risks

Model Extraction: Attackers can reverse-engineer AI models through query-based attacks, effectively stealing the intellectual property embedded in the model.

Model Inversion: Attackers can reconstruct sensitive input data from the model’s outputs, risking exposure of proprietary or personal information.

Bias and Fairness Issues

Discrimination: AI models might perpetuate or amplify biases present in the training data, leading to unfair or discriminatory outcomes.

Reputation Damage: Biased AI decisions can harm an organization's reputation and lead to legal challenges.

Security of AI Development and Deployment Infrastructure

Vulnerabilities in AI Software: Bugs and vulnerabilities in AI software or frameworks can be exploited to gain unauthorized access or cause denial of service.

Insecure APIs: AI models exposed via APIs can be attacked if proper security measures are not in place, such as rate limiting and authentication.  Booth policy requires all API’s to be properly constructed and secured.  Public facing API’s have additional security requirements that must be reviewed by Booth IT Security.

Dark Patterns in Instructional and Tutorial Materials: Popular AI and Machine Learning tutorials often include steps for users to upload models to their services upon the completion of a script.

Misuse and Abuse

Automation of Attacks: AI tools can be misused to automate and enhance the scale of cyberattacks, such as automated phishing or deepfake generation.

Malicious AI Agents: AI tools themselves can be compromised and used to perform malicious activities, intentionally or unintentionally.

Regulatory and Compliance Risks

Non-Compliance: Failure to comply with data protection laws (e.g., GDPR) when using AI tools can result in legal penalties and fines.  Any AI use of PHI, FERPA, PCI-DSS, and PII that falls under an IRB are examples of Booth uses that will need additional approvals and security risk assessments.

Ethical Violations: Misuse of AI can lead to ethical breaches, impacting user trust and organizational integrity.  This also adds to the above risk to Booth’s reputation.

Operational Risks

Reliability and Robustness: AI systems might fail or behave unpredictably under certain conditions, impacting business operations.  This is a new technology, the reliability has not yet been fully established.  This should be considered when using AI for critical tasks.

Dependency on AI: Over-reliance on AI tools without adequate fallback mechanisms can lead to significant operational disruptions if the AI system fails.

Supply Chain Risks

Third-Party Risks: Vulnerabilities in third-party AI tools or libraries can introduce security risks to the organization’s AI systems.

Component Compromise: Compromise of components in the AI supply chain, such as data sources or pre-trained models, can propagate vulnerabilities.

Transparency and Explainability

Black Box Models: Lack of transparency in AI decision-making processes can hinder the identification and mitigation of security risks.

Accountability: Difficulty in tracing decision paths in AI models can complicate accountability and remediation efforts in case of security incidents.

 

Addressing these risks requires a comprehensive security strategy that encompasses robust data protection measures, secure development practices, continuous monitoring, and incident response planning. By proactively managing these risks, organizations can harness the benefits of AI while safeguarding against potential security threats.  Each person that uses an AI tool is responsible to take part in the risk management and reporting process.

Details

Details

Article ID: 13273
Created
Mon 6/17/24 1:40 PM
Modified
Mon 6/17/24 1:40 PM