

I don’t have hard evidence for this (might try to find some at some point, though), but I feel like outages have become progressively more common over the last 4-5 years.
Feels like every time the AI tools “get better” there’s an increase and no one gives a shit. Like, what the fuck? When did stability and reliability become so irrelevant to people?
Hell, GitHub might as well just close up shop with the number of outages it’s had recently! I get that the bubble is a bubble, but how has AI not cost companies enough in outages to show it’s a waste???



Agreed, I use AI for coding the way I used to use Stack Overflow:
find me code examples, explain an error, or give me a summary of how something should work, but then I go confirm or test it separately!
I never trusted Stack Overflow for anything more than pointing me in the right direction for what to research: documentation links, snippets, blogs, that sort of stuff. As long as I tell the AI to give me a reference, about 60-70% of the time I’ll get something I can actually use to confirm the code it gave me or to answer my question. At best, it saves me googling time.