New Anthropic Research Sheds Light on the AI Black Box
Why do large language models behave the way they do? New research offers some clues.
Despite being created by humans, large language models remain quite mysterious. The high-octane algorithms powering the current AI boom have a way of doing things that isn't outwardly explainable to the people observing them. This is why AI has largely been dubbed a "black box," a phenomenon that isn't easily understood.
Newly published research from Anthropic, one of the top companies in the AI industry, attempts to shed some light on the more confounding aspects of AI's algorithmic behavior. On Tuesday, Anthropic published a research paper designed to explain why its AI chatbot, Claude, chooses to generate content about certain subjects over others.
AI systems are set up in a rough approximation of the human brain: layered neural networks that intake and process information and then make "decisions" or predictions based on that information. Such systems are "trained" on large subsets of data, which allows them to make algorithmic connections. When AI systems output data based on their training, however, human observers don't always know how the algorithm arrived at that output.
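To make the "black box" framing concrete, here is a minimal sketch in Python; it is not Anthropic's code, and the layer sizes and random weights are purely illustrative. An input flows through stacked layers of weighted "neurons" and a prediction comes out, but the intermediate activations are just arrays of numbers with no obvious meaning to an observer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy network: random weights standing in for a trained model.
layer_sizes = [8, 16, 16, 2]  # input -> hidden -> hidden -> output
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    """Pass an input through each layer, collecting the intermediate activations."""
    activations = []
    for w in weights:
        x = np.maximum(0, x @ w)  # ReLU "neurons"
        activations.append(x)
    return x, activations

prediction, activations = forward(rng.normal(size=8))
print(prediction)          # the output we can observe
print(activations[0][:5])  # hidden activations: numbers with no obvious interpretation
```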
This mystery has given rise to the field of AI "interpretability," in which researchers try to trace the path of machine decision-making so they can understand its output. In AI interpretability, a "feature" refers to a pattern of activated "neurons" within a neural network, effectively a concept that the algorithm can reference. The more "features" in a neural network that researchers can understand, the better they can understand how certain inputs trigger the network to influence certain outputs.
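As a rough illustration of what a "feature" means here, one can think of it as a direction in activation space: an input "expresses" the feature to the degree that its neuron activations line up with that pattern. The feature vector and activations below are invented for the example, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical hidden-layer activations for three inputs (rows) across 16 "neurons" (columns).
activations = rng.normal(size=(3, 16))

# A "feature" as a pattern over neurons: a unit vector in activation space.
feature = rng.normal(size=16)
feature /= np.linalg.norm(feature)

# How strongly each input activates this feature.
feature_activation = activations @ feature
print(feature_activation)  # one score per input; higher means the feature is more strongly expressed
```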
In a memo on its findings, Anthropic researchers explained how they used a process known as "dictionary learning" to decipher which parts of Claude's neural network mapped to specific concepts. Using this method, the researchers said, they could "begin to understand model behavior by seeing which features respond to a particular input, thus giving us insight into the model's 'reasoning' for how it arrived at a given response."
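The memo itself does not include code, but the general idea of dictionary learning can be sketched with an off-the-shelf sparse-coding routine: recorded activations are decomposed into a small dictionary of recurring patterns (candidate "features"), and each input is then described by which dictionary entries it activates. The activation data and component count below are placeholders, not Anthropic's actual setup.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(2)

# Placeholder stand-in for recorded hidden activations: 200 inputs x 16 neurons.
activations = rng.normal(size=(200, 16))

# Learn a dictionary of 32 sparse "features" whose combinations reconstruct the activations.
dict_learner = DictionaryLearning(n_components=32, alpha=1.0, max_iter=50, random_state=0)
codes = dict_learner.fit_transform(activations)

features = dict_learner.components_  # each row: a pattern over neurons, i.e. a candidate "feature"
print(features.shape)                # (32, 16)

# For a given input, see which features respond -- the kind of readout the researchers describe.
input_idx = 0
active = np.nonzero(codes[input_idx])[0]
print(f"input {input_idx} activates features: {active}")
```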