We're all doing it. "C'mon... it's just one time. What can it hurt?"

We put our first coding question into a GPT and actually get a good response back. And we're hooked! We got actionable code that actually works. It might need a little cleanup for our use case, but it definitely saved us a lot of time.

We go a little further and add Copilot or install Cursor. We integrate it into our regular workflow and are using it for everything we build.

In general, there is absolutely nothing wrong with this! As a developer, I can treat AI as a junior developer. Create a task, give it a scope with specific aims, and get a "PR" ready for review almost instantly. I can review it, make changes, and approve it. I can then have it automatically write new tests for the changes and document anything needed in the README. It's great!

KISS... "Keep It Simple, Stupid"? No! It's "Keep It Stupid, Stupid"

The problems start when AI starts to get... well, stupid.

We've heard about the hallucinations. AI will assert, with the utmost confidence, something that it completely made up. Call it out and it is quick to confess its transgression.

Those are annoying, but easily spotted when coding. Your code just won't work. There are also great tools that can help prevent this, like the newly released Laravel Boost MCP.

These are just annoying. But what about when the stupidity... works?

Sometimes that can be just really inefficient, spaghetti code. A recent example involves the relationship between Engines, Posts, and Chunks in SearchRovr, a new SaaS tool we're building to enable AI-powered search for WordPress.

This is what AI generated:
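(The original generated snippet isn't reproduced here; what follows is a hypothetical reconstruction of the kind of over-engineered code AI tends to produce in this situation. Table names, columns, and structure are illustrative, not the actual output.)

```php
use Illuminate\Pagination\LengthAwarePaginator;
use Illuminate\Support\Facades\DB;

$page = request()->integer('page', 1);
$perPage = 20;

// Manually join posts to engines and chunks, count chunks by hand,
// and paginate by hand — all things Eloquent already does for you.
$rows = DB::table('posts')
    ->join('engines', 'posts.engine_id', '=', 'engines.id')
    ->leftJoin('chunks', 'chunks.post_id', '=', 'posts.id')
    ->where('engines.id', $this->engine->id)
    ->select('posts.*', DB::raw('COUNT(chunks.id) as chunk_count'))
    ->groupBy('posts.id')
    ->get();

// Hand-rolled pagination on top of an already-fetched full result set.
$items = $rows->slice(($page - 1) * $perPage, $perPage)->values();

$posts = new LengthAwarePaginator($items, $rows->count(), $perPage, $page);
```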

Looks really cool and technical, right? It's gotta be good because it's doing so much! In reality, almost all of this is replaced with:

$posts = $this->engine->posts()->paginate(20);

Which one of those is going to be more reliable and easier to follow, maintain, and modify over time?
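For context, that one-liner works because of a standard Eloquent relationship. A minimal sketch, assuming an Engine model with a hasMany relationship to Post:

```php
namespace App\Models;

use Illuminate\Database\Eloquent\Model;
use Illuminate\Database\Eloquent\Relations\HasMany;

class Engine extends Model
{
    /** An engine has many posts. */
    public function posts(): HasMany
    {
        return $this->hasMany(Post::class);
    }
}
```

Define the relationship once, and querying, eager loading, and pagination all come for free.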

When stupid becomes dangerous.

Stupid code can have issues. But for 90% of applications, you may never see the consequences. It just works in the background, ugly as sin, primed to bottleneck the application, yet the bottleneck never comes.

However, sometimes the stupidity becomes dangerous.

Within this same application, we authenticate requests using access tokens scoped at an organization level. The flow should be, "if the requesting user owns the engine, they can modify the engine in the request." It would look something like:

$org = $request->user();

if ($org->id !== $engine->team_id) {
    return response()->json(['message' => 'Unauthorized'], 403);
}

AI generated:

$org = $request->user();

// What are you doing, AI?! This isn't authorization!
if (!$org->id) {
    return response()->json(['message' => 'Unauthorized'], 403);
}

See the problem? It was just requiring an authenticated user. It didn't care whether or not you had the right to modify the engine in the request!

So what's the big deal? That's an easy fix.

It is an easy fix, if you're looking for it, and if you're taking the time to validate your authorization logic. I'm not a "vibe" coder; I am looking for these types of things. But a lot of people are writing software with the only goal of "it works."

There are some who review the output, write tests with appropriate logic, and catch the logical flaws even if they don't have deep development chops.
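A feature test would have caught the flaw above immediately. A hedged sketch (the route, factories, and model names here are assumptions, not the actual SearchRovr codebase), written as a Pest test:

```php
use App\Models\Engine;
use App\Models\Organization;

it('rejects updates to an engine the requester does not own', function () {
    $owner = Organization::factory()->create();
    $intruder = Organization::factory()->create();
    $engine = Engine::factory()->for($owner, 'team')->create();

    // An authenticated user from a *different* org must still be blocked.
    $this->actingAs($intruder)
        ->patchJson("/api/engines/{$engine->id}", ['name' => 'pwned'])
        ->assertForbidden(); // 403, not merely "any logged-in user passes"
});
```

The AI-generated `if (!$org->id)` check passes any authenticated user, so this test fails against it and passes against the ownership check.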

Others... well they're just leaking your data, causing divorces, and sparking countless defamation lawsuits.

With this pivot of anyone being able to create and launch almost anything, is it too good to be true? Can we ever really just trust software again?

Shameless plug...

We care about these things. We're real people building a real business. AI is a tool that we're using, not the toolkit. Whether it's client projects for LimeCuda or our own products (SearchRovr), we work to make sure they function correctly, reliably, and securely.