The article describes how to use abstract syntax trees (ASTs), hash maps, and other computer science concepts to detect duplicated code in software projects. It walks through building a practical duplicate code finder using techniques typically taught in CS courses, showing how theoretical knowledge can solve real-world programming problems.
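The core idea can be sketched in a few lines. This is a minimal illustration of the AST-plus-hash-map approach, not the article's actual implementation: parse each file with Python's `ast` module, strip positional attributes from each function's dump so the fingerprint is purely structural, and bucket fingerprints in a dict so duplicates collide (the `sources` input shape is assumed).

```python
import ast
from collections import defaultdict

def find_duplicate_functions(sources):
    """Group functions by a normalized AST fingerprint.

    `sources` is a dict of {filename: source_code} (hypothetical input shape).
    Returns only the buckets that contain more than one function.
    """
    buckets = defaultdict(list)
    for filename, code in sources.items():
        tree = ast.parse(code)
        for node in ast.walk(tree):
            if isinstance(node, ast.FunctionDef):
                # Dropping attributes like line/column numbers makes the dump
                # purely structural, so formatting differences don't matter.
                fingerprint = ast.dump(node, annotate_fields=False,
                                       include_attributes=False)
                buckets[fingerprint].append((filename, node.name))
    return {k: v for k, v in buckets.items() if len(v) > 1}

sources = {
    "a.py": "def add(x, y):\n    return x + y\n",
    "b.py": "def add(x, y):\n    return x + y\n",
}
dupes = find_duplicate_functions(sources)
```

A real tool would also normalize identifier names and hash subtrees below the function level, but the dict-of-fingerprints skeleton stays the same.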
#code-quality
18 items
The article analyzes code quality across 24 popular open source projects, comparing heritage (long-established, human-written) projects with AI-generated code. It examines metrics like maintainability, security, and documentation to assess differences between the two development approaches.
The article discusses technical debt in AI systems, noting that while code may be free to write, the accumulated technical debt from rapid development and poor practices can become costly. It highlights how technical debt manifests in AI projects through issues like data quality problems, model drift, and infrastructure complexity.
Octokraft is a technical debt management platform that helps teams maintain code health by validating patterns, consistency, and security across repositories. It tracks code friction, detects architecture drift, enforces team conventions, and measures test quality beyond coverage. The platform is free for one project and already analyzes numerous open source projects.
Vibe Guard is a set of three Claude Code skills that audit AI-generated code before pushing to repositories. The skills help identify potential issues and ensure code quality through automated review processes.
An analysis of 103,000 AI-generated GitHub repositories found only about 1% were production-ready. The study examined repositories created using AI coding tools like GitHub Copilot and ChatGPT. Most AI-generated code required significant human intervention to become usable.
A study found that AI code review tools fail to detect security vulnerabilities in AI-generated code. The research shows these tools miss critical security issues that human reviewers would typically catch. This raises concerns about relying solely on AI for code security assessments.
AI agents can help maintain code quality by automating tasks like code reviews, testing, and documentation. These tools analyze code patterns and provide suggestions to improve consistency and reduce bugs. Implementing AI agents requires careful integration with existing development workflows.
Badvibes is a linting tool designed for Vibe Coders that helps identify and fix code issues. The package provides automated code quality checks to maintain coding standards and improve development workflows.
The article argues that well-designed software should not require double-checking by users. It suggests that when software forces users to verify its work, it indicates a design flaw that undermines trust and efficiency.
The article presents a practitioner's perspective on program analysis, discussing practical applications and real-world implementation considerations. It explores how program analysis techniques are used in software development tools and engineering workflows.
The Log4j vulnerability highlighted how dependencies can introduce significant security risks. Developers often import packages to avoid writing a few lines of code themselves, pulling in thousands of lines of untested external code. The author proposes minimizing dependencies and requiring a full justification for every new package addition.
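The trade-off is often illustrated with the classic left-pad case (an assumed example here, not one from the article): a few lines of dependency-free code can replace an entire imported package and its transitive risk surface.

```python
def left_pad(s, width, fill=" "):
    """Pad `s` on the left to `width` characters using `fill`.

    Four lines of auditable code instead of an external package,
    in the spirit of the dependency-minimization argument above.
    """
    return s if len(s) >= width else fill * (width - len(s)) + s

assert left_pad("7", 3, "0") == "007"
assert left_pad("abcd", 3) == "abcd"  # already wide enough: unchanged
```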
Big tech companies produce sloppy code because engineers often work outside their expertise due to short tenures and frequent team changes. Most code changes are made by relative beginners unfamiliar with codebases, while experienced engineers are overloaded. Companies prioritize flexibility over long-term expertise, accepting bad code as a tradeoff.
The article discusses a different type of technical debt that developers face, contrasting it with more commonly recognized forms of technical debt in software development.
The article discusses how developers should use clean code principles as guidance but ultimately move beyond them when necessary for practical solutions.
The article discusses the concept of WET (Write Everything Twice) codebases, contrasting them with DRY (Don't Repeat Yourself) principles. It explores how excessive abstraction can lead to complexity while some duplication may improve code maintainability and clarity.
The article discusses linting practices, specifically suppressions of suppressions: silencing a lint rule inline and then having that suppression itself flagged by another check. It explores technical approaches to managing these layered warnings and exceptions in code analysis tools.
The article discusses various problematic Python coding patterns that can lead to bugs and errors in software development. It examines common pitfalls and anti-patterns that developers should avoid when writing Python code.