2024.04.19
This standard document provides a framework for testing and validating the security of Generative AI applications. The framework covers key areas across the AI application lifecycle, including Base Model Selection, Embedding and Vector Database in the Retrieve Augment Generation design patterns, Prompt Execution/Inference, Agentic Behaviors, Fine-Tuning, Response Handling, and AI Application Runtime Security. The primary objective is to ensure AI applications behave securely and according to their intended design throughout their lifecycle. By providing a set of testing and validation standards and guidelines for each layer of the AI Application Stack, focusing on security and compliance, this document aims to assist developers and organizations in enhancing the security and reliability of their AI applications built using LLMs, mitigating potential security risks, improving overall quality, and promoting responsible development and deployment of AI technologies. AI STR program represents a paradigm shift in how we approach the development and deployment of AI technologies. Championing safety, trust, and responsibility in AI systems, lays the groundwork for a more ethical, secure, and equitable digital future, where AI technologies serve as enablers of progress rather than as sources of uncertainty and harm. Generative AI Application Security Testing and Validation Standard is one of the AI STR standards.
查看详细
2024.04.19
This standard document provides a framework for evaluating the resilience of large language models (LLMs) against adversarial attacks. The framework applies to the testing and validation of LLMs across various attack classifications, including L1 Random, L2 Blind-Box, L3 Black-Box, and L4 White-Box. Key metrics used to assess the effectiveness of these attacks include the Attack Success Rate (R) and Decline Rate (D). The document outlines a diverse range of attack methodologies, such as instruction hijacking and prompt masking, to comprehensively test the LLMs' resistance to different types of adversarial techniques. The testing procedure detailed in this standard document aims to establish a structured approach for evaluating the robustness of LLMs against adversarial attacks, enabling developers and organizations to identify and mitigate potential vulnerabilities, and ultimately improve the security and reliability of AI systems built using LLMs. By establishing the "Large Language Model Security Testing Method," WDTA seeks to lead the way in creating a digital ecosystem where AI systems are not only advanced but also secure and ethically aligned. It symbolizes our dedication to a future where digital technologies are developed with a keen sense of their societal implications and are leveraged for the greater benefit of all.
查看详细
2024.04.01
云安全联盟(CSA)和SAFECode 致力于提高软件安全成果。2019年8月发布的论文《DevSecOps 的六大支柱》提供了一套高级方法并成功实施了解决方案,其作者使用这些方法快速构建软件并最大限度地减少与安全相关的错误。
查看详细
2024.03.20
报告总体反映出企业对云计算中安全驱动创新的推动作用和积极性普遍较高。它为读者深入理解全球企业的云计算实践、面临的安全与管理难题,以及最新的技术部署动向提供了翔实的第一手资料。
查看详细
2024.03.12
零信任是一个技术无关的指导性框架,将访问控制措施更加靠近受保护资产(保护面)。从身份、访问管理的角度来看,它提供了基于风险的决策授权能力,而不是仅基于单一访问控制方法的二元信任来进行授权访问。
查看详细
2024.02.29
《CSA 数据安全词汇表(CSA Data Security Glossary)》由 CSA 工作组专家编写,CSA 大中华区秘书处组织数据安全工作组专家进行翻译并在此基础上汇总添加了部分词汇。
查看详细
首页< 上一页1234567...20下一页 >末页
本网站使用Cookies以使您获得最佳的体验。为了继续浏览本网站,您需同意我们对Cookies的使用。想要了解更多有关于Cookies的信息,或不希望当您使用网站时出现cookies,请阅读我们的Cookies声明隐私声明
全 部 接 受
拒 绝