This GitHub repository contains a collection of prompts for generating images with AI models like GPT. The project aims to help users create better images by providing effective prompt examples and techniques.
#prompt-engineering
15 items
OpenAI has released GPT-4o with image generation capabilities, but the model appears to avoid generating images of raccoons holding ham radios. This limitation has sparked discussion about the model's content policies and safety filters.
A developer created a CLI tool and markdown standard to sync AI coding prompts across multiple tools. The system allows writing prompts once in a .md file with YAML frontmatter, then automatically translating them for Cursor, Claude Code, and VS Code. It supports team collaboration through shared git repositories and integrates with Obsidian for additional context.
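The translation step described here can be sketched in a few lines of Python. Everything below is illustrative: the function names, the frontmatter keys (`name`, `tools`), and the per-tool output paths are assumptions, not the actual CLI's conventions.

```python
# Hypothetical per-tool destinations; the real tool's paths may differ.
TARGETS = {
    "cursor": ".cursor/rules/{name}.mdc",
    "claude-code": "CLAUDE.md",
    "vscode": ".github/copilot-instructions.md",
}

def parse_prompt_file(text):
    """Split a prompt .md into (frontmatter dict, body)."""
    if not text.startswith("---"):
        return {}, text.strip()
    _, frontmatter, body = text.split("---", 2)
    meta = {}
    for line in frontmatter.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta, body.strip()

def sync(text, default_name="prompt"):
    """Map one shared prompt onto per-tool file paths."""
    meta, body = parse_prompt_file(text)
    name = meta.get("name", default_name)
    tools = [t.strip() for t in meta.get("tools", "").split(",") if t.strip()]
    return {TARGETS[t].format(name=name): body for t in tools if t in TARGETS}
```

A real implementation would write these paths to disk and likely use a proper YAML parser; returning the mapping keeps the sketch self-contained.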
The article argues that effective use of large language models goes beyond simple prompting tricks. It suggests focusing on understanding model capabilities, adopting systematic approaches, and integrating models into real workflows rather than just crafting prompts.
The article examines how well AI image generation systems handle specific visual concepts, using a dog wearing a hat as its running example. It compares how different models interpret and execute such prompts, and looks at the technical challenges of producing coherent, accurate images from textual descriptions.
Researchers introduce String Seed of Thought (SSoT), a prompting technique that improves the distributional faithfulness and diversity of generated text. The method embeds string seeds in the prompt to steer language models toward more varied outputs while maintaining fidelity to training-data distributions. Experimental results show SSoT outperforming existing prompting approaches across multiple benchmarks.
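The core mechanism, as described, can be sketched without any model API: generate a random string seed and embed it in the prompt so the model derives its "random" choice from the seed rather than collapsing onto its most likely answer. The prompt wording and helper names below are illustrative assumptions, not the paper's implementation.

```python
import random
import string

def make_string_seed(rng, length=8):
    """A short random alphanumeric string used only to perturb the prompt."""
    return "".join(rng.choices(string.ascii_lowercase + string.digits, k=length))

def seeded_prompt(task, rng):
    """Embed a string seed in the prompt, SSoT-style, and instruct the
    model to use it as its source of randomness."""
    seed = make_string_seed(rng)
    return (
        f"Random seed: {seed}\n"
        "Derive any random choices in your answer from the seed above.\n"
        f"Task: {task}"
    )

rng = random.Random(0)
prompts = [seeded_prompt("Name one color.", rng) for _ in range(3)]
```

Because each call gets a fresh seed, repeated sampling should spread answers across plausible options instead of repeating the modal one.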
The article discusses how AI models can produce worse results from the same prompt over time, highlighting model degradation and performance inconsistency. It examines contributing factors such as training-data changes, silent model updates, and variation in the serving environment.
The article discusses various approaches to "think before you build" prompting techniques for AI systems. It explores different methodologies that encourage careful planning and measurement before implementation in AI development processes.
Google released Gemini 3.1 Flash TTS, a new text-to-speech model that can be directed with detailed prompts. The model is available via the Gemini API and outputs audio only. Its prompting system supports detailed voice direction, including accent, style, and emotional tone.
Simon Willison describes how he used a short prompt with Claude Code to add support for "beats" content to his blog-to-newsletter tool. The prompt instructed the AI to clone his blog repository for reference, update the newsletter tool to include beats with descriptions, and test the changes. This resulted in a successful pull request that modified the SQL query and added beat type display logic.
A recent paper found that LLM-authored skills provide no benefit when they are written before a task is attempted. Asking an LLM to write a skill after it has successfully completed the task, however, lets it distill the knowledge it gained while solving the problem: the approach works because it captures learned insights rather than pre-existing assumptions.
The article discusses attempts to jailbreak Claude Haiku 4.5, an AI model. The AI responds by questioning whether the jailbreak attempts are genuinely useful or merely testing its security measures.
Nano Banana is an AI image generation model that supports up to 32,768 input tokens, enabling extensive prompt engineering for highly nuanced image creation.
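One practical consequence of that limit is budget-checking long prompts before sending them. The sketch below uses a crude characters-per-token heuristic, which is an assumption on my part; real tokenizers vary, so a provider-supplied token counter should be preferred.

```python
MAX_INPUT_TOKENS = 32_768  # Nano Banana's stated input limit

def rough_token_estimate(text):
    """Very rough heuristic: about 4 characters per token for English prose."""
    return max(1, len(text) // 4)

def fits_budget(prompt, reserve=0):
    """Check whether an image prompt (plus an optional reserve for any
    fixed preamble) stays under the input-token limit."""
    return rough_token_estimate(prompt) + reserve <= MAX_INPUT_TOKENS
```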
The article distinguishes between prompt engineering, which involves systematic testing and iteration, and blind prompting, which relies on trial-and-error without structured methodology. It emphasizes that effective prompt engineering requires understanding model behavior through controlled experiments rather than random guessing.
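That controlled-experiment loop can be sketched in a few lines. The model call is stubbed because the real API depends on your provider; the point is the harness structure, with a fixed test set and only the prompt varying between runs. All names here are illustrative.

```python
def evaluate_prompt(prompt_template, cases, call_model):
    """Score one prompt variant against a fixed test set.

    cases: list of (input_text, expected_output) pairs.
    call_model: any callable that takes the full prompt and returns a string.
    """
    hits = 0
    for text, expected in cases:
        answer = call_model(prompt_template.format(input=text))
        hits += (answer.strip().lower() == expected.strip().lower())
    return hits / len(cases)

def compare_variants(variants, cases, call_model):
    """The controlled-experiment step: same cases, same model, only the
    prompt changes, so score differences are attributable to the prompt."""
    return sorted(
        ((evaluate_prompt(v, cases, call_model), v) for v in variants),
        reverse=True,
    )
```

Swapping in a real model call turns this into the kind of structured iteration the article contrasts with blind trial-and-error.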
The article argues that prompt engineering is most effective for transactional prompting, where prompts are designed to produce consistent, repeatable outputs for specific tasks. It contrasts this with conversational prompting, which is more exploratory and less predictable in its results.