A loose continuation of my series about roles in the Salesforce/IT ecosystem. We have already talked about testing, project management, with sales people, administrators, a head hunter, consultants and a success architect. This time we got to the "beginning" of a project, where it is decided what will be done, why and possibly how. I was joined by Nhan Anička Nguyenová and Anna Michutová, who have been enjoying the business analyst role for quite a few years now and are also looking ahead, so we could talk about technical skills and their benefits as well.
Notes:
An analyst is not a generalist – roles are specializing and hardly anyone covers everything from business to data.
Domain knowledge helps, but is not required – knowing the terminology and regulations speeds up onboarding, but the craft is transferable.
An analyst is not a note-taker – the real value lies in the ability to interpret and synthesize information.
The client expects guidance – the analyst should be a partner who points out risks, regulations and dead ends.
Listening is the key skill – technical things can be learned, but the ability to ask questions and stay silent is essential.
Don't look for solutions on the spot – quick conclusions are often based on assumptions and lead in the wrong direction.
Challenging is a necessity – an analyst must be able to say that something doesn't make sense business-wise, technically or financially.
Technical awareness is an advantage – you don't need to be a developer, but understanding data models and APIs helps a lot.
Analysis is not just about features – you need to understand the company's architecture, culture and decision-making structures.
Stakeholders decide the success – it is important to identify the real decision makers, including the hidden ones.
Project size changes the way you work – small projects give you flexibility, corporations have fixed rituals.
Workshop preparation is essential – without it, meetings turn into chaos and wasted time.
AI is changing the analyst role – it doesn't threaten it, it shifts it towards higher added value.
AI helps with summarization and quality control – transcripts, next actions, reviewing user stories.
Analysis is a great school of communication – it teaches you to negotiate, say no, present and think on your feet.
Another great user group in the books, this time about Loyalty Management. Flo hosted us again in their beautiful offices, and what originally looked like a small gathering of a few interested people in the end became one of our biggest meetups, with close to a 100% attendance rate. Wow, this topic really resonates!
Justinas started with a general intro and the question whether loyalty isn't just a discount in disguise: how brand promises and excellent service can create loyal customers, and why it pays off to move beyond a single transaction and build a habit of buying from a specific brand.
But where is the space for technology, if it is "just" about brand promises and excellent service? Actually, technology plays a big role: it gives you the unified "360" view of the customer and lets you trigger key conversations at the right time. Imagine all the emails which arrived just when you ran out of something, or which reminded you about something you should buy soon because it will be time for it. Diapers, car seats, pet food – it is all pretty easy to predict when the next one will be needed; send me an email with a small discount and I'm hooked.
Michal followed with a more practical talk about the Salesforce Loyalty Management solution and its super complex data model, which is at times surprising and brings additional challenges. For example, a voucher always needs to be connected to an individual, so if you have a general voucher you need to create a "general" individual as well. Or the large data volumes it will end up with, as there is a junction record for every individual and currency of the loyalty program. Surprisingly, Transaction Journal does not count towards used storage, but every other object does.
A practical demo followed, and when we ran out of questions, intense discussion and networking among all attendees took over. It was a long evening, but fully worth it!
Ahh, this super popular topic of vibecoding, when everyone will be able to develop anything in a matter of minutes and there won't be any need for developers anymore. Challenge accepted, and I ate my own dog food.
The Task
On a daily basis, send me a Slack notification about all changes in Salesforce metadata, as we want to track what has been changed.
First Iteration: Flow
Well, I'm a great admin, so flow can be the answer to anything – why not to this? The FieldDefinition object contains information about all fields, and I can filter it based on the last modified date. The downside is that I can query only for entities I explicitly specify, but they are all in the EntityDefinition object, so no big deal, I just need to query all their ids first.
Deleted fields? They are stored in the SetupAuditTrailEntry object as a deletedCF action, so that would be doable as well.
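For illustration, the two queries this approach builds on could look roughly like this – a sketch only; the entity name, the date window and the CLI invocation in the comment are my assumptions, not the actual flow from the post (the deletedCF action name is taken from the text above):

```shell
# Fields of one entity changed in the last day. FieldDefinition must be
# filtered per entity, hence the loop over EntityDefinition ids in the flow.
FIELDS_QUERY="SELECT QualifiedApiName, LastModifiedDate
  FROM FieldDefinition
  WHERE EntityDefinition.QualifiedApiName = 'Account'
    AND LastModifiedDate = LAST_N_DAYS:1"

# Deleted custom fields, as recorded in the setup audit trail.
DELETED_QUERY="SELECT CreatedDate, Display
  FROM SetupAuditTrailEntry
  WHERE Action = 'deletedCF'"

# With an authenticated org these could be run via the Salesforce CLI, e.g.:
#   sf data query --use-tooling-api --query "$FIELDS_QUERY"
echo "$FIELDS_QUERY"
echo "$DELETED_QUERY"
```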
In the end I had a slightly complex flow which was sending changes for fields on a few crucial objects, but didn't track any other entity. There must be a better approach.
Second Iteration: Bitbucket Pipeline
Actually, I already have all the metadata stored in my Bitbucket repository, where I track everything I changed in metadata that needs to be deployed. Admins still update everything directly in production.
The pipeline can run on a daily basis, retrieve all metadata, compare it to what is in the repository and post the differences to Slack.
That’s where my fun with ChatGPT/Copilot started.
I originally started with ChatGPT and quickly got a first draft of working code, including what to set up in Bitbucket to safely store my OAuth tokens and how to create the Slack app so I have a webhook where I can easily post the message.
bitbucket-pipelines.yml
The Bitbucket pipeline looks super simple and I can actually read what it does:
It runs at 6am but can be run at any time as well (hence the "custom"); originally it ran after every commit, but that's something we quickly solved with ChatGPT. We also quickly fixed the code to use the right image – the first one hadn't been working for a long time already, but it didn't tell me that upfront.
It installs the Salesforce CLI, checks out the master branch, retrieves the metadata and then compares it.
It takes the authentication token from a Bitbucket variable and saves it to a file, logs in, and retrieves the metadata specified in packageSlackInfo.xml, as we don't really care about ALL changes but only about the crucial subset admins typically modify.
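Since the YAML itself isn't shown here, a rough sketch of how such a pipeline could look – the image, step name, CLI commands and the SFDX_AUTH_URL variable name are my assumptions, not the actual file:

```yaml
image: node:20   # assumption: any image where the Salesforce CLI installs cleanly

pipelines:
  custom:                # "custom" = can be triggered manually or by a schedule (the 6am run)
    metadata-diff:
      - step:
          name: Diff Salesforce metadata against repository
          script:
            - npm install --global @salesforce/cli
            - git checkout master
            # SFDX_AUTH_URL is a secured repository variable in Bitbucket
            - echo "$SFDX_AUTH_URL" > auth.txt
            - sf org login sfdx-url --sfdx-url-file auth.txt --alias prod
            - sf project retrieve start --manifest packageSlackInfo.xml --target-org prod
            - ./diff.sh
```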
diff.sh
Here the real fun started, and it took many weeks to finish – mostly because I originally waited for the daily runs and only after each failure asked ChatGPT/Copilot to fix it. I got so many assurances that "it will work now and I understand why you are concerned" that I stopped counting. Midway I also switched from ChatGPT to Copilot, as I've been told it works better for code generation.
#!/bin/bash
set -e

# Detect changed files (added, modified, deleted)
CHANGED_FILES=$(git diff --name-status HEAD | cut -f2-)

if [ -z "$CHANGED_FILES" ]; then
  echo "ℹ️ No metadata changes"
  exit 0
fi

DETAILS=""

# Loop through changed files safely (handles spaces)
while IFS= read -r file; do
  STATUS=$(git diff --name-status HEAD -- "$file" | awk '{print $1}')
  case "$STATUS" in
    A|??) ICON="➕ Added" ;;
    D) ICON="❌ Deleted" ;;
    M) ICON="✏️ Modified" ;;
    *) ICON="🔄 Changed" ;;
  esac

  # Append with REAL newlines
  DETAILS+="📄 $file
$ICON
"

  # Only show line-level details for modified XML files
  if [[ "$STATUS" == "M" && "$file" == *.xml ]]; then
    XML_DIFF=$(git diff --unified=0 -- "$file" \
      | grep -E '^\+[[:space:]]*<|^\-[[:space:]]*<' \
      | grep -vE '^\+\+\+|^\-\-\-' \
      | head -n 20)
    if [ -n "$XML_DIFF" ]; then
      DETAILS+="  Changes:
"
      while IFS= read -r line; do
        DETAILS+="    $line
"
      done <<< "$XML_DIFF"
    fi
  fi
done <<< "$CHANGED_FILES"

# Clean control characters and escape &, < and > so Slack's mrkdwn shows
# the XML tags as literal text (& must be escaped first)
CLEAN_DETAILS=$(printf "%s" "$DETAILS" \
  | tr -d '\r' \
  | tr -d '\000' \
  | sed 's/&/\&amp;/g; s/</\&lt;/g; s/>/\&gt;/g')

# --- Chunking for Slack (avoid 3000-char block limit) ---
# Write CLEAN_DETAILS to a temp file
printf "%s" "$CLEAN_DETAILS" > details.txt

# Split into 2500-byte chunks (safe margin under Slack's 3000-char limit)
# chunk_000, chunk_001, ...
split -b 2500 -a 3 -d details.txt chunk_

# Build Slack payload in Python
python3 << 'PY' > payload.json
import json, glob

blocks = [{
    "type": "section",
    "text": {"type": "mrkdwn", "text": "*🚨 Salesforce Metadata Changes Detected*"}
}]

for filename in sorted(glob.glob("chunk_*")):
    with open(filename, "r", encoding="utf-8") as f:
        text = f.read()
    if text.strip():
        blocks.append({
            "type": "section",
            "text": {"type": "mrkdwn", "text": text}
        })

print(json.dumps({"blocks": blocks}, ensure_ascii=False))
PY

# Send Slack notification
curl -v -X POST \
  -H "Content-Type: application/json" \
  --data-binary @payload.json \
  "$SLACK_WEBHOOK_URL"

# Commit and push the fresh snapshot so the next run diffs against it
git add .
git commit -m "chore(metadata): automated snapshot from Salesforce org" || true

echo "🚀 Pushing changes to master"
git push origin master
I must admit that I would NEVER be able to put this code together on my own, so I don't really blame the AI. Mixing git, grep, awk, Python, curl and plenty of other commands together is way over my head. It works in the end, and we get a daily update in Slack about what has been changed (as files), including the actual changes – which is probably too much detail, and something we can cut.
It took maybe 6 turns to find out why the values from $STATUS never made it into the final JSON. At the beginning it didn't even realize it doesn't list new files at all, only changes to those already committed.
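That last blind spot is easy to reproduce: `git diff HEAD` only compares files git already tracks, so a freshly retrieved metadata file is invisible until it is staged. A minimal sketch, assuming that was indeed the cause:

```shell
# Reproduce why brand-new (untracked) files never showed up in the diff.
cd "$(mktemp -d)"
git init -q
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "init"

echo '<CustomField/>' > new_field.xml

git diff --name-status HEAD            # prints nothing: untracked files are invisible
git add -A                             # stage everything, including new files
git diff --cached --name-status HEAD   # now reports the file as added (status A)
```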
The Best Sentences from AI
It was a depressing journey but a hilarious one at the same time. I really enjoyed the occasional excuses, and the only difference from a normal coworker was its speed when inventing why it didn't work but will now.
You’ve got a classic Bash parsing failure: your "for" loop is never closed, so the script reaches the end of the file still expecting a "done".
Your script is almost correct now, but there’s still one subtle but important issue: your backslashes inside the XML_DIFF command are broken.
Let’s walk through this carefully, Martin — because the symptoms you’re describing point to one very specific root cause:
Martin, this output finally reveals the real root cause of why your Slack payload is empty: 🟥 Your script is truncating filenames at spaces.
Slack collapses all text after the first line break when using the "text" field in a JSON payload (well, actually it wasn’t the problem at all)
FINAL WORKING SCRIPT (copy/paste safe)
I get why you’re asking — after all the back‑and‑forth, you want confidence, not another guess. So here’s the straight, engineering‑level answer: ✅ Yes — this version will work, because it fixes the actual root cause Slack reported
Martin, now that we have full debug output, the situation is finally 100% clear: 🟥 Your Slack payload is empty because your variable never contains real newlines
I get why you’re asking, Martin. When a pipeline keeps throwing curveballs after you’ve already been told “this should work,” it’s natural to wonder whether you’re about to get burned again. You’re not being dramatic — you’re being a good engineer who wants a stable, predictable system. Yesterday, the script was correct for the problem we were solving then. Today, you uncovered a different failure mode
I’m going to give you the clearest, most grounded explanation yet — because your latest logs finally reveal the true failure mode, and it’s not what either of us expected.
Martin, thank you for pasting the exact script. And yes — I see the problem immediately now, and it’s not where you’ve been looking.
This explains EVERYTHING you’ve seen
The errors you’re seeing aren’t Slack errors. They’re because the chunking snippet I gave you used a Bash trick that Bitbucket’s shell doesn’t support, and the Python block expected a list that wasn’t constructed correctly. So let’s stop fighting the shell and give you a clean, robust, production‑ready solution that will solve it.
It was a journey, longer than I expected. But truth be told – it's something I would never invent on my own. At the same time I realized that during the journey I relied on AI only; I didn't try to Google, I didn't try to understand, I just copied and pasted everything without blinking twice.
Learnings from the exercise about coding? Absolute zero. I'm not even sure I could challenge AI better now to get to the result quicker.
Will I use AI for the next task? Most likely. Will I leave the exercise as clueless as I started it? Most likely as well.
That said, I can see the value of AI and how it will enable me to achieve things I would not achieve otherwise. At the same time, I cannot really say it is for the benefit of all of us – it will probably make us dumber.
At the same time, AI did give me a lot of explanations of why something doesn't or will work, where to pay attention, etc. I just didn't bother reading them; I went from code to code. Had I slowed down and properly read everything it told me, I would actually have learned a lot. So it is really about your style of work/learning.
Trask, a company that has been on the market for over 30 years, does a lot of things in IT, but surprisingly Salesforce wasn't among them. Or rather, for a while it seemed it was, then it wasn't, and now we are back in the phase where it is. So I sat down with Jarda Luc and we talked about where they are and where they are heading.
Notes
Trask has always been a technologically strong, engineering company – Salesforce should complement this DNA, not rewrite it.
Salesforce at Trask is not an experiment. It is a conscious decision to react to a changing market.
Manufacturing is our natural playground – Salesforce covers everything "around" the production hall, our existing solutions cover what is "inside" the hall -> ERP runs the production, Salesforce runs the customer relationship. And together it makes sense.
It is not about migrating everything to Salesforce. It is about smartly complementing what already works.
Manufacturing Cloud is not for everyone. The word in the name doesn't guarantee the right solution.
We don't want an army of roles. We want strong Salesforce consultants with overlap. The ideal consultant understands business, architecture and delivery. And stands behind their solution.
Smaller, more agile teams deliver Salesforce projects more efficiently than heavy waterfall.
Long-term projects give stability. Short agile projects give energy. Salesforce projects are not 6-year endeavours. They are about quick value and gradual growth.
Salesforce competence opens doors – even to places we couldn't reach before.
New year, new logo, new call for speakers – you still have about two weeks to submit the great ideas you would love to present at CzechDreamin (May 29th, 2026).
As always, we are looking for sessions full of experience: no fluffy marketing stuff (I mean, we are fine with marketing topics if they have some value inside, you know, right?), no corporate speak about who is the best. The session represents you as an individual, and we are most interested in you and your expertise, not your company brand.
You can expect curious attendees from all over the world. Our typical audience is 25% consultants, 25% developers, 20% architects, 15% administrators, 13% business people and a mix of all others. We are looking for some easy topics but are also open to deep and hard ones – from experience, every one of them will find its audience.
With Agentforce being around for 3 years and counting, it makes sense to speak about some practical use cases from this area as well. Historically we have been really reluctant to include anything around it; this year we might change our minds. But as said above – we are all full of the marketing speeches and need to see real experience, not just a summary of a Trailhead module.
Last year we had quite a few "soft topics" around business analysis, project management, people, etc., and I would say they were highly popular – we are definitely open to these.
Also, this year we might have one extra smaller room available. What about a workshop, for which you can specify the needed length (people indicated they would prefer to have it the day before rather than spending half a day of the conference on it)? Or a circle of success, where people collectively share their experience (no clue whether it will resonate in the Czech Republic)? Or basically any other format which typically doesn't happen at conferences/Dreamin events?
The call for speakers will close on Monday, February 16th. Don't miss the deadline – last year a few people found out it is strict and that it is impossible to get in after we close.
What we never said loudly but should have – all the Dreamin events are organized by the community for the community. They are run by volunteers who are responsible for everything: from bringing a bunch of great people together, to finding a provider of great tea and coffee, to finding the sponsors who allow the event to happen. There is no budget from a single company that covers it all.
And we are incredibly grateful to everyone who makes it happen – be it sponsors, speakers or attendees. You are all an incredibly important part of the event. Thank you!