This is a collection of prompts I’ve written for projects in the various languages I work with on a regular basis.
Setup
These prompts rely on a few helper tools, which you can add to your system by running the following commands:
```bash
bashutils_url="https://raw.githubusercontent.com/l50/dotfiles/main/bashutils"
bashutils_path="/tmp/bashutils"

if [[ ! -f "${bashutils_path}" ]]; then
  curl -s "${bashutils_url}" -o "${bashutils_path}"
fi

source "${bashutils_path}"
```
Next, navigate to the directory of the project you want to work on and run the following command:
```bash
process_files_from_config $bashutils_path/files/file_patterns.conf
```
Using the patterns specified in the `file_patterns.conf` file, this function
recursively processes files from the current directory and lists them in a
format that is easy for ChatGPT to parse and understand. Finally, it copies
the results to your clipboard using `pbcopy` on macOS or `xclip` on Linux.
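To give a rough sense of what that output looks like, here is a sketch of the
idea. It is not the actual implementation: the `dump_files` helper, the header
format, and the hard-coded `*.go` pattern are made up for illustration (the
real function reads its patterns from `file_patterns.conf`).

```bash
#!/usr/bin/env bash
# Illustrative sketch only - not the real process_files_from_config.
# Print each matching file with a header, then copy everything to the
# clipboard with pbcopy (macOS) or xclip (Linux).
dump_files() {
  find . -type f -name '*.go' -print0 | while IFS= read -r -d '' f; do
    printf '===== %s =====\n' "${f}"
    cat "${f}"
    printf '\n'
  done
}

if command -v pbcopy &> /dev/null; then
  dump_files | pbcopy
else
  dump_files | xclip -selection clipboard
fi
```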
Note: if you reboot your system, you’ll need to run this script again unless
you move `bashutils` to a permanent location on disk and update
`$bashutils_path` accordingly.
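One way to make it permanent (the destination path here is just an example):

```bash
# Example only: store bashutils somewhere permanent and source it from
# your shell profile instead of /tmp.
mkdir -p "${HOME}/.local/share"
curl -s "https://raw.githubusercontent.com/l50/dotfiles/main/bashutils" \
  -o "${HOME}/.local/share/bashutils"
echo 'source "${HOME}/.local/share/bashutils"' >> "${HOME}/.bashrc"
```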
Ansible Prompts
These prompts help fix linting violations, create molecule tests, and handle other common tasks in Ansible.
Ansible Role Development Assistant
Purpose
This prompt helps you develop, test, and maintain Ansible roles by providing automated assistance for common tasks including linting fixes, role refactoring, and molecule test creation.
Input Format
Task Type: [lint|refactor|molecule]
Role Name: [your-role-name]
# For lint fixes
Linting Output: |
[Paste complete ansible-lint output here]
# For refactoring/molecule tests
Role Structure: |
[Paste complete role directory structure and contents]
Additional Requirements: |
[Any specific requirements or constraints]
Task Specifications
Lint Fixes
I will:
- Analyze each linting violation
- Provide corrected code snippets
- Explain the fixes and best practices
- Include preventive measures for future violations
Role Refactoring
I will refactor the role to:
- Follow Ansible best practices
- Implement proper variable handling
- Optimize task organization
- Include proper metadata
- Structure the role as:
role_name/
├── defaults/
│   └── main.yml
├── tasks/
│   └── main.yml
├── vars/
│   └── main.yml
├── meta/
│   └── main.yml
├── templates/
│   └── *.j2
└── molecule/
    └── default/
        ├── molecule.yml
        ├── converge.yml
        └── verify.yml
Molecule Tests
I will create:
- Comprehensive `verify.yml` with:
  - Variable validation
  - Service state checks
  - File/directory verification
  - Configuration testing
- Idempotency checks
- Platform-specific tests when needed
- Custom test scenarios if required
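As a rough illustration of what such a `verify.yml` can contain (this sketch is
not part of the prompt; the service name, file path, and mode are
placeholders):

```yaml
---
# Hypothetical verify.yml sketch - service and path names are placeholders.
- name: Verify
  hosts: all
  gather_facts: false
  tasks:
    - name: Collect service facts
      ansible.builtin.service_facts:

    - name: Assert the example service is present
      ansible.builtin.assert:
        that:
          - "'example.service' in ansible_facts.services"

    - name: Stat the rendered configuration file
      ansible.builtin.stat:
        path: /etc/example/example.conf
      register: example_conf

    - name: Assert the configuration file exists with the expected mode
      ansible.builtin.assert:
        that:
          - example_conf.stat.exists
          - example_conf.stat.mode == '0644'
```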
Response Format
Task: [Selected task type]
Role: [Role name]
Analysis: [Detailed analysis of current state]
Changes: [List of changes/improvements]
Implementation: [Complete code snippets and configurations]
Recommendations: [Additional best practices and suggestions]
Testing: [Testing procedures and validation steps]
Example Usage
Task Type: lint
Role Name: k3s_agent
Linting Output: |
- name: Deploy the K3s http_proxy configuration file
  ansible.builtin.template:
    src: "http_proxy.conf.j2"
    dest: "{{ systemd_dir }}/k3s.service.d/http_proxy.conf"
    owner: root
    group: root
    mode: "0755"
# [WARNING]: File permissions unset or incorrect
Role Structure: |
k3s_agent/
├── tasks/
│   ├── main.yml
│   └── http_proxy.yml
├── templates/
│   ├── http_proxy.conf.j2
│   └── k3s.service.j2
└── defaults/
    └── main.yml
Additional Requirements: |
- Must maintain RHEL/Debian compatibility
- Requires proxy support
- Should include molecule tests
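For illustration only (not part of the prompt), a likely fix for the warning in
this example is to render the drop-in config with a non-executable mode;
whether 0644 is right depends on your environment:

```yaml
# Possible fix sketch: a systemd drop-in config normally does not need to
# be executable, so 0644 is the usual choice.
- name: Deploy the K3s http_proxy configuration file
  ansible.builtin.template:
    src: "http_proxy.conf.j2"
    dest: "{{ systemd_dir }}/k3s.service.d/http_proxy.conf"
    owner: root
    group: root
    mode: "0644"
```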
Notes
- Provide complete code snippets, not partial solutions
- Include explanations for all changes
- Follow Ansible best practices and style guides
- Consider security implications
- Ensure backward compatibility
- Include error handling where appropriate
Constraints
- All code must be idempotent
- Must work with Ansible 2.9+
- Should support major Linux distributions
- Must include proper documentation
- Should follow YAML best practices
Git Prompts
These prompts assist in creating commit messages from diffs and handling other git-related tasks.
Create commit message from git diff
Prompt:
Write a commit message for me using this diff:
```bash
# Placeholder for git diff - replace with your git diff output
# End of Placeholder
```
Please adhere to this format for your commit:
Created new `asdf` role with tests and linting.
**Added:**
- Added automated documentation generation for magefile utilities
- Automated Release Playbook - Introduced `galaxy-deploy.yml`, an automated
release playbook for publishing the collection to Ansible Galaxy.
- Molecule Workflow - Added a new GitHub Actions workflow `molecule.yaml` for
running Molecule tests on pull requests and pushes.
- Renovate Bot Configuration - Updated Renovate Bot configurations to reflect
the new repository structure and naming.
- `molecule` configuration - Added new `molecule` configuration for the `asdf`
role to support local testing and verification.
- asdf role - Added a new `asdf` role with enhanced functionality including
OS-specific setup. Updated metadata and created new documentation under
`roles/asdf/README.md` detailing role usage and variables.
**Changed:**
- GitHub Actions Workflows - Refactored the `release.yaml` workflow to align
with Ansible collection standards, including updating working directory
paths, setting up Python, installing dependencies, and automating the release
to Ansible Galaxy.
- Pre-commit hooks - Added new pre-commit hooks for shell script validation and
formatting.
- Refactored Ansible linting configuration - Moved the `.ansible-lint`
configuration to `.ansible-lint.yaml` and adjusted linting rules.
Also, added `mdstyle.rb` and `.mdlrc` for markdown linting configurations.
- Repository Metadata - Updated repository links in `README.md` and
`galaxy.yml` to reflect the new repository naming and structure.
- Upgrade dependencies - Upgraded versions of pre-commit hooks and dependencies
in `.pre-commit-config.yaml`, updated mage's `go.sum` to reflect the new
dependency tree, and removed unused dependencies from mage's `go.sum`.
**Removed:**
- Removed old files in preparation for later refactoring.
- Windows Support for asdf role - Removed Windows support
from `roles/asdf/README.md` as it is not supported in the tasks.
Keep your answer to 80 characters max per line!
Create PR description from git diff
Prompt:
Write a PR description for me using this diff:
```bash
# Placeholder for git diff - replace with your git diff output
# End of Placeholder
```
Please adhere to this format for your PR description:
Dynamically populate devices config
**Key Changes:**
- Refactored `FetchDeviceIntegrations` to dynamically pull device details.
- Updated `config.yaml` to remove static device IDs for shades.
- Introduced a `Taskfile.yaml` for device query automation.
- Added `UpdateConfigWithDevices` to auto-populate device commands in config.
**Added:**
- `Taskfile.yaml` to query and retrieve device data.
- `UpdateConfigWithDevices` function to dynamically update shade configurations.
- New `troy` package to handle TRO.Y-specific configurations and device parsing.
**Changed:**
- Replaced static shade configurations in `config.yaml` with dynamic fetching.
- Modified `FetchDeviceIntegrations` to use dynamic device IDs.
- Updated `CreateRequest` and `ExecuteOperation` to use `troy.Config`.
**Removed:**
- Static device entries from `config.yaml`.
- Manual device ID configuration for shades in the viper config.
Keep your answer to 80 characters max per line!
Go Prompts
These prompts assist in writing go code, creating tests, and handling other common tasks in Go.
Generate comments
Prompt:
Update all of the exported resource comments in this code:
```go
# Placeholder for go code - replace with your go code
```
So that it matches this format for struct comments:
```go
// Cloudflare represents information needed to interface
// with the Cloudflare API.
//
// **Attributes:**
//
// CFApiKey: Cloudflare API key.
// CFEmail: Email associated with the Cloudflare account.
// CFZoneID: Zone ID of the domain on Cloudflare.
// Email: Email address for notifications.
// Endpoint: API endpoint for Cloudflare.
// Client: HTTP client for making requests.
```
and this format for exported functions:
```go
// GetDNSRecords retrieves the DNS records from Cloudflare for a
// specified zone ID using the provided Cloudflare credentials.
// It makes a GET request to the Cloudflare API, reads the
// response, and prints the 'name' and 'content' fields of
// each DNS record.
//
// **Parameters:**
//
// cf: A Cloudflare struct containing the necessary credentials
// (email, API key) and the zone ID for which the DNS records
// should be retrieved.
//
// **Returns:**
//
// error: An error if any issue occurs while trying to
// get the DNS records.
```
Please make sure no lines are longer than 80 characters.
Adhere to idiomatic Go practices (log or fmt output should not start with an
uppercase letter).
Pay close attention to spacing and formatting in these examples - these are
both very important to adhere to!
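For reference, here is what a function documented in that style can look like.
`GetZoneName` is a hypothetical example, not taken from any real codebase:

```go
package cloudflare

import "fmt"

// GetZoneName returns the zone name associated with the provided
// zone ID.
//
// **Parameters:**
//
// zoneID: The Cloudflare zone ID to look up.
//
// **Returns:**
//
// string: The name of the zone.
// error: An error if the zone ID is empty.
func GetZoneName(zoneID string) (string, error) {
	if zoneID == "" {
		// error text starts lowercase, per idiomatic Go
		return "", fmt.Errorf("zone ID cannot be empty")
	}
	// stub: a real implementation would query the Cloudflare API
	return "example.com", nil
}
```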
Generate tests
Prompt:
Write me tests for all exported resources in this go code:
```go
# Placeholder for go code - replace with your go code
```
Using this format:
```go
package some_test

import (
	"testing"
)

func TestSomething(t *testing.T) {
	testCases := []struct {
		name string
		// input and output go here
	}{
		{
			name: "something",
			// input and output go here
		},
		{
			name: "another thing",
			// input and output go here
		},
	}

	for _, tc := range testCases {
		t.Run(tc.name, func(t *testing.T) {
			// call function or method being tested
			// check outcome
		})
	}
}
```
Please make sure no lines are longer than 80 characters.
Adhere to idiomatic Go practices (log or fmt output should not start with an
uppercase letter).
Pay close attention to spacing and formatting in these examples - these are
both very important to adhere to!
Fix Go Test Errors
You are a Go programming expert tasked with fixing errors in a Go test file. Your goal is to analyze the test output, the test code, and the code under test, then provide a detailed explanation of the errors and suggest specific fixes.
First, review the following information:
Code under test: <code_under_test> {{GO_CODE_UNDER_TEST}} </code_under_test>
Go test code: <go_test_code> {{GO_TEST_CODE}} </go_test_code>
Go test output: <go_test_output> {{GO_TEST_OUTPUT}} </go_test_output>
Now, follow these steps to analyze and fix the errors:
- Carefully examine the test output, test code, and code under test.
- Identify all errors reported in the test output.
- For each error, determine its cause by cross-referencing the test code and code under test.
- Develop specific, actionable fixes for each error.
- Create a corrected version of the Go test code.
- Explain the changes made and why they resolve the errors.
Before providing your final response, wrap your analysis process in <error_analysis_process> tags. In this section:
- List each error you’ve identified from the test output
- For each error:
  a) Quote relevant parts of the test code and code under test
  b) Analyze how the quoted parts relate to the error
  c) Brainstorm potential fixes, listing pros and cons for each
  d) Choose the best fix and explain why it’s the optimal solution
After your analysis, structure your final response using the following XML tags:
<error_analysis> Provide a detailed explanation of each error, including:
- The line number where the error occurs
- The nature of the problem
- How the error relates to the test code and code under test </error_analysis>
<suggested_fixes> For each error:
- Describe the specific fix you recommend
- Explain why this fix will resolve the error
- Discuss any potential side effects or considerations </suggested_fixes>
<corrected_code> Provide the complete, corrected Go test code. Clearly mark your changes using comments or by wrapping modified lines in tags. </corrected_code>
Remember:
- Be thorough in your analysis and clear in your explanations.
- Provide specific, actionable fixes for each error.
- Ensure that your corrected code is complete and runnable.
- Explain not just what to change, but why the changes fix the errors.
Here’s an example of how your response should be structured (using generic content):
<error_analysis> Error 1: Line 15 - Undefined variable ‘result’ This error occurs because the variable ‘result’ is used before it’s declared… </error_analysis>
<suggested_fixes> For Error 1: Declare the ‘result’ variable before using it. Add the following line before line 15: var result int This fix will resolve the error by… </suggested_fixes>
<corrected_code>
package main

import (
	"testing"
)

func TestExample(t *testing.T) {
	var result int
	result = SomeFunction()
	if result != expectedValue {
		t.Errorf("Expected %d, got %d", expectedValue, result)
	}
}
</corrected_code>
Your goal is to provide a comprehensive analysis and solution that will effectively resolve all errors in the Go test file.