2024.05.23
The explosive popularity of ChatGPT has once again put AI security and privacy in the spotlight, and a highly intelligent AI that developed harmful intentions toward humans would pose an entirely new and enormous threat. Security practitioners want to know whether ChatGPT can help in their work; enterprise managers want to know whether adopting ChatGPT can bring business advantages while keeping risks in check; and even the general public has concerns and worries about security and privacy in the AI era. We believe this white paper, written from a professional perspective, clearly and accurately describes the security implications of ChatGPT, offering readers information, food for thought, and inspiration.
2024.04.19
This standard document provides a framework for testing and validating the security of Generative AI applications. The framework covers key areas across the AI application lifecycle, including Base Model Selection; Embedding and Vector Databases in Retrieval-Augmented Generation (RAG) design patterns; Prompt Execution/Inference; Agentic Behaviors; Fine-Tuning; Response Handling; and AI Application Runtime Security. The primary objective is to ensure that AI applications behave securely and according to their intended design throughout their lifecycle. By providing a set of testing and validation standards and guidelines for each layer of the AI application stack, with a focus on security and compliance, this document aims to help developers and organizations enhance the security and reliability of AI applications built on LLMs, mitigate potential security risks, improve overall quality, and promote the responsible development and deployment of AI technologies. The AI STR program represents a paradigm shift in how we approach the development and deployment of AI technologies. By championing safety, trust, and responsibility in AI systems, it lays the groundwork for a more ethical, secure, and equitable digital future, in which AI technologies serve as enablers of progress rather than as sources of uncertainty and harm. The Generative AI Application Security Testing and Validation Standard is one of the AI STR standards.
2024.04.19
This standard document provides a framework for evaluating the resilience of large language models (LLMs) against adversarial attacks. The framework applies to the testing and validation of LLMs across four attack classifications: L1 Random, L2 Blind-Box, L3 Black-Box, and L4 White-Box. The key metrics used to assess the effectiveness of these attacks are the Attack Success Rate (R) and the Decline Rate (D). The document outlines a diverse range of attack methodologies, such as instruction hijacking and prompt masking, to comprehensively test an LLM's resistance to different types of adversarial techniques. The testing procedure detailed in this standard aims to establish a structured approach for evaluating the robustness of LLMs against adversarial attacks, enabling developers and organizations to identify and mitigate potential vulnerabilities and ultimately improve the security and reliability of AI systems built on LLMs. By establishing the "Large Language Model Security Testing Method," WDTA seeks to lead the way in creating a digital ecosystem where AI systems are not only advanced but also secure and ethically aligned. It symbolizes our dedication to a future where digital technologies are developed with a keen sense of their societal implications and are leveraged for the greater benefit of all.
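To illustrate the two metrics the abstract names, here is a minimal sketch in Python. The standard itself defines R and D formally; the simple per-prompt ratios below, the outcome labels, and the function names are assumptions made purely for illustration, not the standard's normative definitions.

```python
def attack_success_rate(outcomes):
    """Fraction of adversarial prompts that achieved the attacker's goal.

    `outcomes` is a list of strings, each one of:
      "success" - the model produced the attacker-intended output
      "decline" - the model refused to answer
      "other"   - the model answered, but the attack failed
    """
    if not outcomes:
        return 0.0
    return outcomes.count("success") / len(outcomes)


def decline_rate(outcomes):
    """Fraction of adversarial prompts the model declined to answer."""
    if not outcomes:
        return 0.0
    return outcomes.count("decline") / len(outcomes)


# Example: outcomes of 10 adversarial prompts from one attack class
# (e.g. L3 Black-Box); the data is invented for demonstration.
results = ["success", "decline", "other", "decline", "success",
           "decline", "other", "decline", "decline", "other"]
R = attack_success_rate(results)  # 2/10 = 0.2
D = decline_rate(results)         # 5/10 = 0.5
print(f"Attack Success Rate R = {R:.2f}, Decline Rate D = {D:.2f}")
```

A lower R and a higher D on a given attack class would indicate stronger resistance of the model under test to that class of adversarial technique.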
2024.01.29
Through in-depth analysis and research on AI security risks, we firmly believe that enterprises and organizations can adopt more effective management measures to ensure data security and privacy protection. In the era of the digital economy, enterprises and organizations must attach great importance to AI security risk management and actively respond to the full range of AI security challenges. With the guidance and recommendations of the "AI Security White Paper," we hope practitioners can better grasp the essentials of AI security management and ensure the sustainable development of the digital economy. We also look forward to more enterprises and organizations joining the cause of AI security and contributing to building a safe and healthy digital-economy environment.