I’ve been a “slow adopter” of ChatGPT and other LLM products, but I’ve finally gotten to the point where I am using them on a semi-regular basis.
For many years, I’ve used transcription and translation software, but I’m not sure which of the apps I’ve used count as “AI” in the sense of LLM-driven software. I’ve used Dropbox’s built-in transcription feature on various audio recordings, and Google and Microsoft have long had very good machine translation tools. I imagine some of these pre-ChatGPT tools now use some sort of LLM behind the scenes, but I’m not really sure.
Nowadays, I’m using MacWhisper for most of my dictation and transcription needs. It’s a little finicky sometimes, but it might be the first AI app for macOS that I’ve had success using. I initially downloaded it for transcribing audio files, but I was surprised at how handy the real-time dictation feature was. It has saved me a lot of time when sending quick messages to people over text/Discord/Slack/etc. And I like the fact that the model runs locally on my computer, which means I don’t need to pay as I go and it’s more private.
I’ve also started using it to correct my Spanish. I haven’t been able to find anything like Grammarly for Spanish, and the few tools that do exist in that space aren’t very good at catching the types of errors non-native speakers make: sentences that are grammatically correct but awkward-sounding. Until recently, I relied on asking native speakers to proofread my work, if I didn’t feel too guilty about bothering them. Now, I often run my writing through ChatGPT.
I’ve also started using LLMs a lot in my work as a web developer. The most useful thing for me is pasting an error log into ChatGPT and having it parse and explain it, and perhaps offer solutions or next steps for debugging. I haven’t really gotten the hang of GitHub Copilot or any other code editor prediction tool. I want to, but they always feel more annoying than they’re worth.
My primary frustration with asking ChatGPT coding questions is that it “hallucinates” — that is, it makes things up out of thin air when it doesn’t know the answer. In that respect, it’s kind of like a human: unlike traditional software, it can simply be wrong. You have to learn to value its feedback while taking what it says with a grain of salt, as you might with a coworker. Still, I find AI hallucinations much more frustrating than the kinds of errors a fellow human might make.
The other day, I was trying to learn the Interactivity API, WordPress’s own declarative and reactive JavaScript framework, similar to the popular Alpine.js.
“Does the Interactivity API have an if directive or equivalent?” I asked.
“Yes – the Interactivity API includes a built-in data-wp-if directive for conditional rendering,” it boldly responded (emphasis in the original).
I insisted I couldn’t find such a directive in the docs. It then walked back its statement, saying the directive doesn’t exist in current versions of WordPress, and awkwardly implied that it used to exist or had appeared in an experimental PR. I didn’t dig any further to verify whether that implication was true, but it seemed strange that it stopped short of affirming it outright. Specifically, it told me:
Anything you may have seen that looks like that is either:
- an example from an experimental PR, or
- a third-party helper layered on top of the core API.
Anything I may have seen like that!? “You’re the one who told me that, just now,” I wanted to scream.
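For what it’s worth, the conditional-rendering workaround I eventually landed on is binding the hidden attribute to a piece of state rather than using any if directive. A rough sketch — the myPlugin namespace and isOpen state value are made-up names for illustration, not anything from the actual docs:

```html
<!-- Sketch: toggle visibility with data-wp-bind--hidden instead of a
     (nonexistent) data-wp-if. Namespace, state, and action names are
     hypothetical and would be defined in the plugin's store. -->
<div data-wp-interactive="myPlugin">
  <button data-wp-on--click="actions.toggle">Toggle</button>
  <p data-wp-bind--hidden="!state.isOpen">Only visible when isOpen is true.</p>
</div>
```

The element stays in the DOM either way and is merely hidden, which is usually good enough, though not quite the same as a true conditional directive.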
So in addition to hallucinations, I often find myself in these kinds of circular conversations with the machine. I’ve been told that if I paid for ChatGPT, I’d have access to better models, but I’m on the fence. I’m already paying for GitHub Copilot, which includes a chat feature that I haven’t found to be much better in this regard. I haven’t used it as much, though; I probably should.
Despite these frustrations, I’ve had other instances where potentially hours of work were saved because ChatGPT (or increasingly Google’s Gemini) came through for me.
For example, there is a hidden setting in WordPress, usually stored only in the database’s wp_options table, called upload_url_path, which sets a custom base URL for all the images used on the site. Somehow it got set to the wrong value on a site I was working on the other day, and I couldn’t figure out why my images wouldn’t load. I would have spent who knows how long staring in confusion if ChatGPT hadn’t told me about this obscure database record.
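Since then I’ve kept a couple of WP-CLI commands handy for checking this setting without poking at the database directly. A sketch, assuming WP-CLI is installed on the server:

```shell
# Inspect the hidden setting (empty or unset on a default install)
wp option get upload_url_path

# Clear it so WordPress falls back to the default uploads URL
wp option delete upload_url_path
```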
I could go on with other examples of tedious problems AI helped me solve without having to scour Stack Overflow or other blogs and message boards: a missing WordPress theme support I needed but always forget about, stray WordPress multisite-specific constants I needed to remove to fix a single-site install, issues with HTTP Cache-Control and Content-Security-Policy headers, how to animate the length of an arrow shape in an SVG without distorting its other proportions, what certain characters meant in a complicated regular expression in an Apache .htaccess file … and the list goes on and on.
I feel guilty though, because I like Stack Overflow and other tech help sites. Or at least I appreciate them, and I feel bad that Google Gemini searches them for all the info I need and gives me an answer without me even needing to visit the sites themselves. It does feel extremely unfair to said websites that they have been repurposed into data fodder to keep the AI beast going.
And the toll that these lightning-fast AI crawlers take on servers is tremendous. I’ve noticed lately in my own work that DoS attacks are on the rise, and I can’t help but wonder if at least some of that traffic is just these AI products hitting 10-30 sites at a time to generate an answer for someone.
Anyway, I just wanted to log an update on the current state of my AI usage. I haven’t grown as adept at leveraging it as most of my friends, but I’ve been surprised at how much I find myself using it lately. There are things I don’t like about it, including ethical dilemmas beyond just the abusive web scraping. Nonetheless, it feels good to have gained some insight into where it shines and where it doesn’t, which also makes me less fearful of it. It is less foreign to me now, and while it’s impressive, I don’t think it’s anywhere close to apocalypse-inducing levels of tech advancement.