I have 19 years of experience in the information technology space; the past 10 of those years have been spent supporting various projects requiring automation and the ability to "put the pieces together and figure it out". Like most of my peers, I am cautious about the enablement of "AI" within our sphere, but that caution has helped me use AI to my benefit. This blog is not meant to be a philosophical debate, but a demonstration of what AI can do for you from the perspective of someone like myself.
Recently, AI enabled me to quickly create a validation script for each environment of a project. I was thoroughly impressed by the comments it added to my code explaining why the code I had written was functional and met my requirements.
First, let's take a step back and look at what I needed to accomplish. My team has been working on a complex process with many moving parts. Each component of this process requires its own configuration, and some of these components require calls to external services from Databricks. I needed a validation script that would enable my team to find missing configs in each environment during promotions, and a way to validate that the APIs the process reaches out to were functioning before testing the environment we were deploying to.
I had an idea of what I wanted in my head, and on a first pass I decided to write zero code to see what Copilot would give me. What it gave me was exactly what I asked for: a complicated, elongated script with a lot of repeated code. It came out to around 1,200 lines, and while the code didn't look terrible, it was not what I was looking for. I did not need a complicated script for this. I just needed to loop through configuration files in a directory and validate that certain values exist in the environment's key vault, or hit an API based on the values in the configuration files. My first attempt was not what I would have developed on my own, so I spent some time rethinking the prompt I gave Copilot to come up with a better approach for my goals.
My second attempt was to write out the basic logic for how I wanted to loop through the files, adding comments where I wanted the calls that would validate whatever configuration I needed to check. I then instructed Copilot, with an updated prompt, to locate those sections of my script and fill them in, with specific instructions for each API and generic instructions for anything secret-related. The results impressed me: with just an updated prompt, I got the script from roughly 1,200 lines down to ~150. Initial runs of my validation script were working, but still required some manual intervention. I noticed that the API call code it generated didn't check status codes; it only checked whether the Python requests library threw an error to determine if something had failed. I added logic for the status codes I deemed acceptable for the test, based on my understanding of how the API worked. I then used Copilot to generate some more code for other things I needed, and when reviewing the status-code logic I had added, I noticed a comment on it that I did not write. It had looked at the status codes I was checking for, reviewed the body of the POST request going to the API, and added a comment explaining why that status code was acceptable.
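The gap described above is a common one: `requests` only raises an exception for transport-level failures (DNS, timeouts, connection refused), so a 500 response looks like "success" unless you inspect `response.status_code` yourself. A minimal sketch of the fix, with hypothetical names and an assumed set of acceptable codes:

```python
# Codes that prove the API is up for this probe; an assumption for
# illustration — the right set depends on the specific API's contract.
ACCEPTABLE_CODES = {200, 201, 202}


def api_is_healthy(status_code, acceptable=frozenset(ACCEPTABLE_CODES)):
    # Judge health by status code, not merely by whether a call raised.
    return status_code in acceptable


def probe_api(url, payload):
    # Hypothetical validation probe: transport errors and unacceptable
    # status codes both count as failures.
    import requests  # imported here so the decision logic above has no dependency

    try:
        resp = requests.post(url, json=payload, timeout=10)
    except requests.RequestException:
        return False  # DNS failure, timeout, connection refused, etc.
    return api_is_healthy(resp.status_code)
```

Separating the status-code decision from the network call also makes the acceptance rule trivially testable without touching the real API.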
To me, this was impressive: the fact that it understood my code without any of that being in the prompt meant the LLM grasped what I was trying to do, and it "agreed" with what I had determined was an acceptable test for the APIs. While this may seem small and insignificant to some, it showed me that minor tasks I would once have brute-forced my way through can now be handled more easily through an LLM, and they are actually saving me time. The advancement in code generation this past year is mind-blowing. If someone had asked me at the start of 2025 whether I would be 'vibe coding' my way through basic tasks in my processes a year later, I would have said no. It turns out I was wrong, and AI-generated code has improved immensely.
We are on the precipice of something that will change the way multiple industries operate, and it is already starting to do so. It is up to us, the stewards of tech, not only to set policies for the use of these tools, but also to have that philosophical debate on how and when to use them.
We should not fear AI as the boogeyman of tech but embrace it and responsibly steward its advancement throughout our industry.
To discuss how Spyglass MTG can support you on your AI journey, contact us today!