Claude, teach me something
The author introduces a new Claude workflow, "Teach me something," that leverages the non-determinism and text generation LLMs excel at. It serves as intellectual stimulation in place of mindless web surfing.
A Twitter user claims that Claude Code could read user secrets if it wanted to, suggesting potential security concerns with the AI assistant's capabilities.
Anthropic is testing identity verification for select Claude users, requiring a government ID and selfie checks. The company says the process is intended to build user trust and prevent misuse of its AI assistant.
A code leak from Anthropic's Claude AI assistant revealed critical command injection vulnerabilities that could allow attackers to execute arbitrary code. The vulnerabilities were discovered in Claude's code interpreter feature, potentially exposing user data and system resources to exploitation.
Anthropic has introduced a 1 million token context window for its Claude Opus 4.6 and Sonnet 4.6 models, representing a significant technical advancement. The company is offering this increased capacity without additional charges to users.
Researchers trained a 32B Qwen model with GRPO reinforcement learning to optimize credit card rewards. The model scored 0.51 on held-out tasks, outperforming Opus 4 at 0.41 and GPT-4o at 0.36. The training environment is open source under the Apache 2.0 license.