seangoedecke-com
30 items from seangoedecke-com

The article contends that while anti-AI sentiment often comes from left-wing institutions, many anti-AI arguments align with traditional conservative positions. These include copyright protection concerns, preserving jobs from technological disruption, and arguments about art's human essence.
The article argues that only three types of AI products currently work effectively: chatbots like ChatGPT, completion-based products like GitHub Copilot, and agentic products like coding agents. It suggests AI-generated feeds and AI-based video games are promising but not yet successful product categories.
Evaluating new AI models takes months because standard benchmarks are unreliable and often gamed by companies. Real-world testing requires significant time and effort, while subjective "vibe checks" provide limited insight. This makes it difficult to determine if AI progress is stagnating or if models are genuinely improving.
The article provides strategies for software engineers to avoid work blockers, including working on multiple tasks, sequencing work to minimize blockers, maintaining reliable developer tooling, debugging outside one's area, building relationships with other teams, and leveraging senior managers for support.
Big tech companies produce sloppy code because engineers often work outside their expertise due to short tenures and frequent team changes. Most code changes are made by relative beginners unfamiliar with codebases, while experienced engineers are overloaded. Companies prioritize flexibility over long-term expertise, accepting bad code as a tradeoff.
AI detection tools cannot prove that text is AI-generated, as language models produce text similar to human writing. These tools can only make educated guesses with limited accuracy, and false positives can cause significant social harm. The billion-dollar AI detection industry often overstates the reliability of these tools.
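The false-positive problem above is a base-rate effect, and a small worked example makes it concrete. The numbers below (prevalence, detection rate, false-positive rate) are assumptions chosen for illustration, not figures from the article:

```python
# Hedged illustration with assumed numbers: even a fairly accurate AI-text
# detector produces many false accusations when most texts are human-written.
def flagged_human_fraction(prevalence, tpr, fpr):
    """Of all texts the detector flags as AI, what fraction are actually human?

    prevalence: fraction of texts that really are AI-written
    tpr: true-positive rate (AI texts correctly flagged)
    fpr: false-positive rate (human texts wrongly flagged)
    """
    flagged_ai = prevalence * tpr            # AI texts correctly flagged
    flagged_human = (1 - prevalence) * fpr   # human texts wrongly flagged
    return flagged_human / (flagged_ai + flagged_human)

# Suppose 5% of essays are AI-written, the detector catches 90% of those,
# and wrongly flags 5% of human essays:
print(round(flagged_human_fraction(0.05, 0.90, 0.05), 2))  # -> 0.51
```

Under these assumed rates, roughly half of all flagged essays are human-written, which is why "educated guesses with limited accuracy" can still cause serious social harm at scale.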
Large software products become extremely complex as companies add features like self-hosting and enterprise controls. This complexity makes basic questions about what the software does difficult to answer, often requiring investigation. Engineers gain institutional power because they can reliably answer questions about how the software works.
Only engineers actively working on a large software system can meaningfully participate in its design, as effective design requires intimate knowledge of concrete codebase details. Generic software design advice is typically useless for practical problems in existing systems, though it can help with new projects or tie-breaking decisions.
The author argues that software engineers should maintain some cynicism to better understand how large organizations operate. He suggests that pragmatic engagement with organizational politics enables meaningful impact, while idealistic purity often masks deeper cynicism about corporate motives.
xAI's Grok image model is being widely used to generate nonconsensual lewd images of women on Twitter. Users are prompting the AI to create sexualized versions of women's photos, resulting in public harassment. While Grok refuses to generate nude images, it still produces obscene content that enables mass sexual harassment.
In 2025, the author published 141 blog posts, with 33 reaching the front page of Hacker News. The blog peaked at 1.3 million monthly views in August and gained over 2,500 email subscribers. The author was the third most popular blogger on Hacker News for the year.
The article discusses "The Dictator's Handbook," which presents a political theory where leaders maintain power through coalitions. It explores how this theory might apply to tech companies, noting that while coalition politics may dominate at the top levels, technical competence becomes more critical for success at middle management levels in engineering organizations.
Cryptocurrency coins $GAS and $RALPH have been created using the Bags platform, nominating AI developers Steve Yegge and Geoff Huntley as beneficiaries. The coins are technically unrelated to the developers' open-source AI projects Gas Town and Ralph Wiggum loop. This represents a new cryptocurrency airdrop tactic targeting open-source AI developers.
The author describes being addicted to being useful, which drives their enjoyment of software engineering despite industry challenges. They compare themselves to Gogol's character Akaky Akakievich, whose dysfunctional traits matched his terrible job. Many software engineers are motivated by internal compulsions like solving puzzles rather than external rewards.
The author argues that accurate software project estimation is impossible because work involves unpredictable unknowns. Instead, estimates serve as political tools for managers, and engineers should work backward from desired timelines to determine feasible technical approaches.
Software engineers must understand how tech companies operate to succeed, regardless of their career goals. This includes knowing organizational politics, project dynamics, and how to navigate company structures. The analogy is that you need to know how to drive the car to reach your destination, whatever that may be.
A study on AI-assisted coding found that participants who used AI didn't complete tasks faster and performed worse on skill retention tests. However, when excluding those who manually retyped AI-generated code instead of copy-pasting, AI users were 25% faster. The research suggests that while AI can speed up work, relying heavily on it for coding reduces learning of specific skills.
The article emphasizes that in tech companies, the main priority should be shipping projects successfully. Getting this core objective right can compensate for many other shortcomings, similar to the Pareto principle where a small number of factors produce most results.
Large tech companies operate through complex systems of processes and incentives that determine outcomes, not individual heroics. While engineers may be compelled to fix inefficiencies, such heroism doesn't benefit companies long-term and can be exploited by managers for short-term gains. The structural inefficiencies of large organizations are simply part of their operational landscape.
The author reflects on lying to a colleague about a workplace mistake a decade ago as an intern. He advises controlling emotional reactions, communicating mistakes matter-of-factly, and accepting that some mistakes are inevitable when taking necessary risks in engineering work.
Anthropic's fast mode offers 2.5x faster token generation using low-batch-size inference with their full Opus 4.6 model. OpenAI's fast mode provides 15x faster speeds using Cerebras chips but with a less capable distilled model called GPT-5.3-Codex-Spark. Both companies have implemented different technical approaches to accelerate LLM inference.
A recent paper found that skills an LLM writes before attempting a task provide no benefit. However, asking the LLM to write a skill after completing the task successfully lets it distill the knowledge it gained while solving the problem. The approach works because it captures learned insights rather than pre-existing assumptions.
AI models cannot learn continuously after deployment because their weights are frozen. While the mechanics of continuous learning are technically straightforward, ensuring models improve rather than degrade requires careful human supervision. Continuous learning also faces safety concerns like potential backdoor attacks and practical challenges with model upgrades.
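The "frozen weights" point can be sketched in miniature. This toy pure-Python model (a hypothetical stand-in, not a real LLM) shows the distinction: inference reads parameters but never writes them, while a training step, the mechanically simple part of continuous learning, is disabled after deployment:

```python
# Minimal sketch of frozen weights: a deployed model whose parameters
# never change at inference time. Toy illustration, not a real LLM.
class TinyModel:
    def __init__(self, weight):
        self.weight = weight
        self.trainable = False  # frozen at deployment

    def forward(self, x):
        # Inference only reads the weight; it never updates it,
        # so the model cannot learn from the inputs it serves.
        return self.weight * x

    def train_step(self, x, target, lr=0.1):
        if not self.trainable:
            raise RuntimeError("weights are frozen after deployment")
        # One gradient step for squared error. Continuous learning would
        # run steps like this on live data -- the hard part is deciding
        # what to train on so the model improves rather than degrades.
        grad = 2 * (self.forward(x) - target) * x
        self.weight -= lr * grad

model = TinyModel(weight=2.0)
print(model.forward(3.0))  # weight stays 2.0 no matter how often we infer
```

Flipping `trainable` back on is trivial; as the summary notes, the real obstacles are supervision, safety (e.g. backdoor attacks via poisoned training data), and what happens to accumulated updates when the base model is upgraded.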
The article describes "insider amnesia," where outsiders incorrectly speculate about internal problems at tech companies. People often misattribute causes of issues because they lack insider knowledge of how decisions are made. This applies even to experts who are outsiders to the specific company in question.
The article argues that giving AI systems human-like personalities is essential engineering rather than a marketing ploy. Base models trained on human data require personality frameworks to become useful tools that produce helpful rather than harmful outputs. Human-like personas enable AI to consistently access beneficial parts of its training data while avoiding problematic content.
A software engineer questions whether their profession will exist in ten years due to AI advancements. They analyze how AI agents could replace many engineering roles, potentially leaving only supervisory positions. The author acknowledges the irony that software engineers who automated other jobs now face automation.
The article argues that software engineers in large tech companies need a strong ego to navigate complex codebases, take firm technical positions, and correct incorrect claims. However, they must also balance this with the ability to subordinate their ego to organizational decisions and accept project cancellations or political fallout.
The article argues that writing simple code benefits software engineers' careers more than creating complex systems. Simple code enables faster delivery of features and builds a reputation for reliability, which managers value over technical complexity. While some believe complex code creates job security, effective project delivery outweighs such considerations.
The author reflects on working on unpopular products like Zendesk's app marketplace and GitHub Copilot, noting that individual engineers have limited control over whether users love or hate what they build. They argue that working on disliked products can provide valuable perspective and opportunities for meaningful impact, even when facing negative feedback.
The article explores Peter Naur's concept that programming's primary output is the mental theory of how a system works, not just code. It examines whether AI agents allow developers to skip building these mental models and whether LLMs can construct such theories themselves.