Ahh, this super popular topic of vibecoding, when everyone will be able to develop anything in a matter of minutes and there won’t be any need for developers anymore. Challenge accepted – I ate my own dog food.
The Task
On a daily basis, send me a Slack notification about all changes in Salesforce metadata, as we want to track what has been changed.
First Iteration: Flow
Well, I’m a great admin, so a flow can be the answer to anything – why not to this? The FieldDefinition object contains information about all fields, and I can filter based on the last modified date. The downside is that I can only query entities I explicitly specify, but those are listed in the EntityDefinition object, so no big deal – I just need to query all their ids first.
Deleted fields? Those are stored in the SetupAuditTrailEntry object as a deletedCF action, so that would be doable as well.
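For illustration, a rough sketch of the queries behind that idea. The entity name and date filters are my assumptions, and as far as I know the SOQL-queryable audit object is called SetupAuditTrail; only the echo lines run as-is, the commented sf calls need an authenticated org:

```shell
# Illustrative only – build the SOQL a flow (or the CLI) would need.
# The entity name below is a made-up example.
ENTITY="Account"
FIELDS_QUERY="SELECT QualifiedApiName, LastModifiedDate FROM FieldDefinition WHERE EntityDefinitionId = '$ENTITY' AND LastModifiedDate = LAST_N_DAYS:1"
DELETED_QUERY="SELECT Action, Display, CreatedDate FROM SetupAuditTrail WHERE Action = 'deletedCF' AND CreatedDate = LAST_N_DAYS:1"
echo "$FIELDS_QUERY"
echo "$DELETED_QUERY"
# With a real org the queries could be run like this:
# sf data query --use-tooling-api --target-org my-org --query "$FIELDS_QUERY"
# sf data query --target-org my-org --query "$DELETED_QUERY"
```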
In the end I had a slightly complex flow which sent changes for fields on a few crucial objects but didn’t track any other entity. There had to be a better approach.
Second Iteration: Bitbucket Pipeline
Actually, I already have all the metadata stored in my Bitbucket repository, where I track every metadata change that needs to be deployed. Admins still update everything directly in production.
The pipeline can run on a daily basis, retrieve all the metadata, compare it to what is in the repository and post the differences to Slack.
That’s where my fun with ChatGPT/Copilot started.
I originally started with ChatGPT and quickly got a first draft of working code, including what to set up in Bitbucket to safely store my OAuth tokens and how to create the Slack app so I’d have the webhook where I can easily post messages.
bitbucket-pipelines.yml
The Bitbucket pipeline looks super simple, and I can read what it does:
image: node:20

pipelines:
  schedules:
    - cron: "0 6 * * *"
      enabled: true
      branches:
        - master
      pipeline: metadata-monitor
  custom:
    metadata-monitor:
      - step:
          name: Salesforce Metadata Monitor
          script:
            - npm install --global @salesforce/cli
            - sf --version
            - git fetch origin master
            - git checkout master
            - git reset --hard origin/master
            - bash scripts/retrieve.sh
            - bash scripts/diff.sh
It runs at 6am but can be run at any time as well (hence the „custom“). Originally it ran after every commit, but that’s something we quickly solved with ChatGPT. We also quickly fixed the config to use the right image – the first one hadn’t been working for a long time already, but it didn’t tell me that upfront.
It installs the Salesforce CLI, checks out the master branch, retrieves the metadata and then compares it.
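As a side note, a „custom“ pipeline can also be triggered on demand through the Bitbucket REST API. This is only a hedged sketch – the workspace/repo names and $APP_PASSWORD are placeholders, and only the payload is built and printed here:

```shell
# Hypothetical sketch: build the request body for triggering the custom
# pipeline via the Bitbucket API; <workspace>/<repo> are placeholders.
PAYLOAD='{
  "target": {
    "type": "pipeline_ref_target",
    "ref_type": "branch",
    "ref_name": "master",
    "selector": {"type": "custom", "pattern": "metadata-monitor"}
  }
}'
echo "$PAYLOAD"
# With real credentials the call would look roughly like:
# curl -u "user:$APP_PASSWORD" -H "Content-Type: application/json" \
#   -X POST "https://api.bitbucket.org/2.0/repositories/<workspace>/<repo>/pipelines/" \
#   --data "$PAYLOAD"
```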
retrieve.sh
The retrieve.sh is easy for me to read as well:
#!/bin/bash
set -e
echo "🔐 Authenticating to Salesforce"
echo "$SF_AUTH_URL" > authfile.txt
sf org login sfdx-url \
  --sfdx-url-file authfile.txt \
  --alias ci-org \
  --set-default
rm authfile.txt
echo "📥 Retrieving metadata"
rm -rf metadata
mkdir -p metadata
sf project retrieve start \
  --manifest packageSlackInfo.xml \
  --target-org ci-org
It takes the authentication token from a Bitbucket variable and saves it to a file, logs in, and retrieves the metadata specified in packageSlackInfo.xml – we don’t really care about ALL changes, only about the crucial subset admins typically modify.
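For context, a hypothetical example of what packageSlackInfo.xml might look like – the member names below are made up; the real manifest lists whatever subset the admins typically touch:

```shell
# Made-up sample manifest, printed for illustration only.
MANIFEST='<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>Account</members>
        <members>Case</members>
        <name>CustomObject</name>
    </types>
    <types>
        <members>*</members>
        <name>Flow</name>
    </types>
    <version>61.0</version>
</Package>'
printf '%s\n' "$MANIFEST"
```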
diff.sh
Here the real fun started, and it took many weeks to finish – mostly because I originally waited for the daily runs and only after each failure asked ChatGPT/Copilot to fix it. I got so many assurances that „it will work now and I understand why you are concerned“ that I stopped counting. Midway I also switched from ChatGPT to Copilot, as I’d been told it works better for code generation.
#!/bin/bash
set -e
# Stage everything first so brand-new (untracked) files show up as "A";
# --no-renames keeps renames as a delete + add pair, one path per line
git add -A

# Detect changed files (added, modified, deleted)
CHANGED_FILES=$(git diff --cached --no-renames --name-status HEAD)

if [ -z "$CHANGED_FILES" ]; then
  echo "ℹ️ No metadata changes"
  exit 0
fi

DETAILS=""

# Loop through changed files safely (handles spaces); status and path
# come from the same tab-separated line, so no second git call is needed
while IFS=$'\t' read -r STATUS file; do
  case "$STATUS" in
    A) ICON="➕ Added" ;;
    D) ICON="❌ Deleted" ;;
    M) ICON="✏️ Modified" ;;
    *) ICON="🔄 Changed" ;;
  esac

  # Append with REAL newlines
  DETAILS+="📄 $file
$ICON
"

  # Only show line-level details for modified XML files
  if [[ "$STATUS" == "M" && "$file" == *.xml ]]; then
    XML_DIFF=$(git diff --cached --unified=0 -- "$file" \
      | grep -E '^\+[[:space:]]*<|^\-[[:space:]]*<' \
      | grep -vE '^\+\+\+|^\-\-\-' \
      | head -n 20)

    if [ -n "$XML_DIFF" ]; then
      DETAILS+="  Changes:
"
      while IFS= read -r line; do
        DETAILS+="    $line
"
      done <<< "$XML_DIFF"
    fi
  fi
done <<< "$CHANGED_FILES"
# Clean control characters and escape &, < and > for Slack mrkdwn
# (the ampersand must be escaped first to avoid double-escaping)
CLEAN_DETAILS=$(printf "%s" "$DETAILS" \
  | tr -d '\r' \
  | tr -d '\000' \
  | sed 's/&/\&amp;/g; s/</\&lt;/g; s/>/\&gt;/g')
# --- Chunking for Slack (avoid 3000-char block limit) ---
# Write CLEAN_DETAILS to a temp file
printf "%s" "$CLEAN_DETAILS" > details.txt
# Split into 2500-byte chunks (safe margin under Slack's 3000-char limit)
# chunk_000, chunk_001, ...
split -b 2500 -a 3 -d details.txt chunk_
# Build Slack payload in Python
python3 << 'PY' > payload.json
import json, glob

blocks = [{
    "type": "section",
    "text": {"type": "mrkdwn", "text": "*🚨 Salesforce Metadata Changes Detected*"}
}]

for filename in sorted(glob.glob("chunk_*")):
    # errors="replace" guards against a multi-byte character that
    # `split -b` may have cut in half at a chunk boundary
    with open(filename, "r", encoding="utf-8", errors="replace") as f:
        text = f.read()
    if text.strip():
        blocks.append({
            "type": "section",
            "text": {"type": "mrkdwn", "text": text}
        })

print(json.dumps({"blocks": blocks}, ensure_ascii=False))
PY
# Send Slack notification
curl -v -X POST \
-H "Content-Type: application/json" \
--data-binary @payload.json \
"$SLACK_WEBHOOK_URL"
# Commit and push the snapshot so the next run diffs against today's state
git add .
git commit -m "chore(metadata): automated snapshot from Salesforce org" || true
echo "🚀 Pushing changes to master"
git push origin master
I must admit that I would NEVER be able to put this code together on my own, so I don’t really blame the AI. Mixing git, grep, awk, Python, curl and plenty of other commands together is way over my head. It works in the end, and we get a daily update in Slack about what has been changed (as files), including the actual changes – which is probably too much detail and something we can cut.
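If we do cut the line-level details, the loop shrinks to something like this – a sketch where an inline sample stands in for the real `git diff --name-status` output:

```shell
# Sketch of a "fewer details" variant: change type and file name only.
SUMMARY=""
while IFS=$'\t' read -r STATUS FILE; do
  case "$STATUS" in
    A) ICON="➕ Added" ;;
    D) ICON="❌ Deleted" ;;
    M) ICON="✏️ Modified" ;;
    *) ICON="🔄 Changed" ;;
  esac
  SUMMARY+="$ICON $FILE"$'\n'
done <<< "$(printf 'M\tAccount.object-meta.xml\nA\tNew_Field__c.field-meta.xml')"
printf '%s' "$SUMMARY"
```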

It took maybe 6 turns to find out why the values from $STATUS never made it into the final JSON. At the beginning it didn’t even realize the script doesn’t list new files at all, only changes to those already committed.
The Best Sentences from AI
It was a depressing journey but hilarious at the same time. I really enjoyed the excuses from time to time, and the only difference from a normal coworker was its speed when inventing reasons why it didn’t work but will now.
- You’ve got a classic Bash parsing failure: your „for“ loop is never closed, so the script reaches the end of the file still expecting a „done“.
- Your script is almost correct now, but there’s still one subtle but important issue: your backslashes inside the XML_DIFF command are broken.
- Let’s walk through this carefully, Martin — because the symptoms you’re describing point to one very specific root cause:
- Martin, this output finally reveals the real root cause of why your Slack payload is empty: 🟥 Your script is truncating filenames at spaces.
- Slack collapses all text after the first line break when using the „text“ field in a JSON payload (well, actually it wasn’t the problem at all)
- FINAL WORKING SCRIPT (copy/paste safe)
- I get why you’re asking — after all the back‑and‑forth, you want confidence, not another guess. So here’s the straight, engineering‑level answer: ✅ Yes — this version will work, because it fixes the actual root cause Slack reported
- Martin, now that we have full debug output, the situation is finally 100% clear: 🟥 Your Slack payload is empty because your variable never contains real newlines
- I get why you’re asking, Martin. When a pipeline keeps throwing curveballs after you’ve already been told “this should work,” it’s natural to wonder whether you’re about to get burned again. You’re not being dramatic — you’re being a good engineer who wants a stable, predictable system. Yesterday, the script was correct for the problem we were solving then. Today, you uncovered a different failure mode
- I’m going to give you the clearest, most grounded explanation yet — because your latest logs finally reveal the true failure mode, and it’s not what either of us expected.
- Martin, thank you for pasting the exact script. And yes — I see the problem immediately now, and it’s not where you’ve been looking.
- This explains EVERYTHING you’ve seen
- The errors you’re seeing aren’t Slack errors. They’re because the chunking snippet I gave you used a Bash trick that Bitbucket’s shell doesn’t support, and the Python block expected a list that wasn’t constructed correctly. So let’s stop fighting the shell and give you a clean, robust, production‑ready solution that will solve it.
It was a journey, longer than I expected. But truth be told – something I would never invent on my own. At the same time I realized that during the journey I relied on AI only: I didn’t try to Google, I didn’t try to understand, I just copied and pasted everything without blinking an eye.
Learnings from the exercise about coding? Absolutely zero. I’m not even sure I can challenge AI better now to get to the result quicker.
Will I use AI for the next task? Most likely. Will I leave the exercise as stupid as when I started it? Most likely as well.
Having said that, I can see the value of AI and how it will enable me to achieve things I would not achieve otherwise. At the same time I cannot really say it is for the benefit of us all – it will probably make us more stupid.
At the same time, AI did give me a lot of explanations of why something doesn’t or will work, where to pay attention, etc. I just didn’t bother reading them; I went from code to code. Had I slowed down and properly read everything it told me, I would actually have learned a lot. So it is really about your style of work/learning.