News: Anthropic removes Claude Code from the $20/month "Pro" subscription plan (applies to new users; ongoing updates)
Anthropic appears to have removed Claude Code access from its $20/month "Pro" plan. Existing Pro users can still use the feature via the Claude web app, while the support documentation now mentions Claude Code access only under the "Max plan".
A Twitter user claims that Claude Code is capable of reading user secrets, raising potential security concerns about the AI assistant's level of system access.
The article explores how different stakeholders might react to advancing AI capabilities, examining various scenarios and considerations surrounding AI development and deployment.
Claude Code has full shell access capabilities that enterprise security tools like CASBs cannot detect. This creates visibility gaps for organizations trying to monitor AI tool usage across their systems.
Anthropic is testing identity verification for Claude users, requiring government ID and selfie checks. The company aims to build user trust and prevent misuse of its AI assistant. This verification process is currently being tested with select users.
A code leak from Anthropic's Claude AI assistant revealed critical command injection vulnerabilities that could allow attackers to execute arbitrary code. The vulnerabilities were discovered in Claude's code interpreter feature, potentially exposing user data and system resources to exploitation.