AI is the April Fools that Never Ends

I was keeping myself busy being frustrated at the whole AI phenomenon yesterday, so I decided to turn it into a post.  And what better day to write about AI than April 1st, the traditional day to celebrate hoaxes and outright fabrications.

Tell me how this makes you feel…

There is so much to be frustrated about when it comes to AI that it can be hard to sort it all out.

I am not so concerned about them being plagiarism machines.  I mean, have you met humans before?  Originality is so rare in us that any bit we can spot in the endless repeats and callbacks is immediately stolen, appropriated, or otherwise claimed and reused by all who can manage it.  Theft is practically our default state, so of course we automated it.

Neither am I all that concerned about how BAD AI is at the moment or how unlikely the whole “but if we just throw more data at it maybe THEN it will work” plan is to play out.  I grew up in Silicon Valley and have worked here my entire life.  “Fake it ’til you make it!” is our motto, and riding a bad idea down the drain, promising it will all work out right up until they are hauling away the cubes and office chairs, has been the modus operandi of startups for more than 40 years.  So many bad ideas, so much poor execution, so many liars who knew the tech was nowhere close to what they promised given a pass to start again.  Welcome to the valley.

I do get miffed when things are presented as AIs when they are pretty much what one would call “software.”  I heard a tech pundit the other day lauding AI because his son had “downloaded an AI” that helps him track when exclusive tennis shoe drops are happening.  Look, doofus, if you can download it to do a simple task then either it isn’t AI or the bar for what counts as AI is so low that AI simply means software now.

But when I dig down into it, I really have two major gripes with the current state of AI.

The first is how many pundits, people experienced with the ways of Silicon Valley, people who should be able to smell the bullshit from a mile off, people who have a history of calling things out, are just blandly accepting that AI is what the people selling AI say it is.

AI is not like that student in class or co-worker who is currently putting in just enough effort to get by, but who could be brilliant if only properly motivated.  There is no intelligence in AI, to the point that even some of the AI companies are backing off on that and putting forth the more literally correct “large language model” terminology when speaking of it.

AI is a probability machine.  It is literally the digital refinement of having an infinite number of monkeys at an infinite number of typewriters attempt to produce the works of Shakespeare.  AI doesn’t “know” or “understand” anything.

AI, putting an infinite number of monkeys out of work!

It is just faster monkeys with a filter that keeps complete gibberish from being produced, a model that puts words in a specific order because it has a database full of examples to compare against.  AI doesn’t know what subject-verb agreement is, it only calculates that, mathematically speaking, certain nouns go with certain verb conjugations.
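
To make the monkey math concrete, here is a toy sketch of the idea… my own illustration with invented counts, nothing from any actual product… a word-level bigram sampler that picks each next word purely from how often it has seen that word follow the previous one:

```python
import random

# A toy bigram "language model": given the previous word, pick the next
# word in proportion to how often it followed that word in the training
# text.  The counts below are invented for illustration; a real LLM
# learns billions of such statistics over tokens, not whole words.
bigram_counts = {
    "the": {"sun": 4, "model": 3, "monkeys": 1},
    "sun": {"rises": 5, "sets": 2},
    "rises": {"in": 6},
    "sets": {"in": 2},
    "in": {"the": 5},
    "monkeys": {"type": 2},
    "model": {"predicts": 3},
}

def next_word(prev: str) -> str:
    counts = bigram_counts.get(prev, {"the": 1})  # fall back to "the"
    words = list(counts)
    weights = [counts[w] for w in words]
    # No grammar, no meaning: just sampling in proportion to past examples.
    return random.choices(words, weights=weights, k=1)[0]

word = "the"
for _ in range(8):
    print(word, end=" ")
    word = next_word(word)
print()
```

Run it a few times and you get different fluent-looking fragments, none of which the program understands.  The real thing is unimaginably more elaborate, but the principle of “most probable next thing” is the same.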

The actual output can be, and often is, absolute lies.

I have done a few attempts at testing AI to see if it could, in essence, regurgitate known facts, like the names of all of the EverQuest expansions.  I wasn’t asking for intelligence, I was asking if it could plagiarize a Wikipedia entry.  And the three I tried couldn’t do that.  Not only couldn’t they do it, but when they MADE UP expansions and I called them on it, they responded that they had been incorrect.

This is AI.  A chat bot that gives you the answers it mathematically believes you want to hear without validating their correctness.  There is an old saying about not trusting somebody to tell you the weather or whether the sun rose in the east, and it applies in spades to AI.  If the model somehow decides that the sun rises in the west and that it is warm and sunny despite the snowstorm currently raging, it will tell you that.

Meanwhile, every AI image that is kind of cool… the ones that get the right number of hands on people, the right number of fingers on hands, and little details like that… and that gets shared around is built on a huge pile of bad images that make no sense at all, images that wouldn’t fool anybody into thinking an actual person made them.  Because, again, there is no intelligence here, just the probability that in a scene like this a finger or an arm in a particular place is mathematically more likely than not.

And yes, my little probes into AI are hardly substantive.  I have been asking silly questions about niche or esoteric topics to a few freely available AI bots.  You could fix the more egregious issues, you could do something to validate facts, you could make it no worse than Wikipedia or PACER on some fronts.  But that would take up time better spent trying to get the monkeys to be faster, as far as the AI enterprises are concerned.  Damn accuracy, go monkeys go!

A side part of this is the garbage feedback loop AI will be creating, where AI generated articles and comments that are simply making up things will then be used as fodder to feed the voracious appetite for data to train the language model, so that lies will end up cemented into the foundation of the model and somebody will ask for a list of EverQuest expansions and see The Threat of Faydwer included… or maybe they will ask for something important and get an equally bad answer, only they won’t be pursuing trivia but making a business or legal decision based on that answer.

We’ve already seen it, and it will happen more and more as the false promises about how AI can replace people spread far and wide.

Which brings me to the final point of frustration, which is just how hard Wall Street and the Venture Capitalist class have bet on AI.

Rich people HATE paying smart people a large salary to do something they cannot do themselves.  The tenets of capitalism only apply when they favor them.  The audacity of somebody being able to demand a large salary merely due to market scarcity is anathema.

Wall Street will only be happy when the divide is the ultra rich and the minimum wage worker, with as few people as possible in between, and AI feeds into this obscene fantasy.  Expertise must be automated and made a commodity at all costs.

They may not even hate the people they want to replace in any personal way.  But the biggest expense at every company is always the people, and all the more so in high tech, where most companies would cease to have any value at all if you removed the staff.

So AI is the panacea, the promised land for Wall Street, a way to replace people.  They believe that only by reducing payroll can they achieve their promise of infinitely increasing shareholder value.  It isn’t like the signs are not already out there.

When Fortune is saying it is too much, maybe stop and think?

The Gilded Age, for those needing to come up to speed, was the height of the corporations and trusts… the robber barons… whose excesses stirred a public outrage that even their wealth couldn’t stem forever.

Wall Street demands eternal increases in shareholder value… the line must always go up… and the way forward is to cut staff and raise prices.  Layoffs and greedflation will work in the short term, though once again the inability to see the obvious issues with the plan is astounding.  When we’re all minimum wage serfs will they start reducing the quality of our gruel?  How will we be able to afford shiny new iPhones to keep the economy going?  I am sure the company store that serves the company housing we’ll live in will have a finance plan for us.  It will be Ready Player One without the metaverse aspect to keep us all distracted and compliant.

And that reference loops back into why Wall Street is going so hard on AI; the last few get-rich-quick schemes, things like blockchain and the metaverse, both brazen attempts to be landlords on the net, collecting a tax on users without providing any value, have pretty much collapsed.  You can still find an occasional headline about them over at VentureBeat, which is the magic mirror of the VC soul… if they have souls… reflecting back exactly what they believe in at any given moment.  And they believe in AI because they don’t have another gimmick handy.  Their last success, cloud computing, itself another rent-seeking scheme at its root, has been normalized, but can no longer be depended on to make the line go up.

Maybe they’ll find another false prophet to chase, but the allure of AI… the ability to fire all those brains who got into college on merit rather than by being a legacy or because daddy donated or through an expensive applications coach… is so strong, and they want it to be true so badly, that billions will go to stand up giant, environment-destroying, resource-devouring data centers so that they can eliminate a few analyst positions without any thought to the absurdity of it.

Do we even get to choose if AI is good or bad?

There is something that doesn’t come up a lot: the data center needs behind generating an incorrect list of EverQuest expansions.

Don’t misunderstand me.  I am not calling for a revolution here.  In the end I believe an economy based on capitalism delivers the most good.  But money accumulates at the top… trickle down economics was always a lie… and money distorts politics, which means there is no political will to stop a small number of very rich people from min-maxing the economy in their favor so they can amass wealth beyond any human need many times over.  In my lifetime we’ve gone from the dominant business theme being that if the company does a good job it will make a profit to the company’s only purpose being ongoing and ever-increasing profit at any cost.

Anyway, that went kind of dark.  I was really on about AI, but couldn’t help getting into root causes, which always leads back to late stage capitalism excess.  But, as noted, I have spent a lifetime in Silicon Valley.  I have seen the VC machine up close, had no less than John Doerr walk into our lab and demand we strip RAM modules from our test machines to put in his PowerBook (as it turned out, our PowerBooks only had the base RAM allocation as well), and have lived through many cycles of trying to meet a short term quarterly financial goal by hamstringing the company in a way that would harm it in the long term, because the long term doesn’t matter.  All that matters is that the line goes up today!

TL;DR – It would be nice if some pundits who actually know better would mention that the AI emperor has no clothes.

Also, part of this was triggered in my head by a thread over at BlueSky that goes into some of the foundations of what is being peddled right now.  It started up around the New York City chat bot which, as you might expect, is making stuff up.  It boggles the mind.

Finally, all art included was, of course, AI generated.  I never said I wasn’t part of the problem.

21 thoughts on “AI is the April Fools that Never Ends”

  1. bhagpuss

    Great post. I agree with everything you said but especially with your observations on where “AI” is now and the indescribably huge gulf between that and where far too many people claim it is.

    I posted about AI fairly recently, explaining the reason I’ve stopped posting about AI. For a while the news around AI was amusing, entertaining, even occasionally exciting. Now it’s just tedious and repetitive. This “AI” does the thing they all do only a bit faster or a bit less badly. Big deal.

    If AI actually worked, I’d be delighted. I’d love a reliable, accurate virtual research assistant. It would save me hours every week if I could just type – or even better speak – a question and get a sourced and referenced answer I could cut and paste into the blog without having to fact-check every single line. We are absolutely nowhere even close to that.

    As for the art “AIs”, they make for fun toys for a while but I got bored playing with them a while back. It’s actually much more fun to do what I always used to do and mess around with screenshots or photos in a paint package. I’d still use AI for illustration on occasion but it all looks so samey it’d have to be very occasional.

    The whole “Everyone’s going to lose their jobs” thing is just a pit-trap waiting for any company gullible enough to walk into it and vanish without trace. Sure, you can replace human-generated garbage output with “AI” garbage, but any business that relies on either accurate data or aesthetic value is going to crash and burn with the current crop of “AIs”. Have you seen the supposed cutting-edge, best-in-show all-AI music videos? They look like badly-animated slide shows. Literally anyone with an iPhone could produce a better video in half an hour.

    The one half-way impressive “AI” I’ve seen recently is Suno, the music AI. I fiddled about with it the other day and it produced a pretty convincing two-minute indie-pop song on my first attempt from a single prompt. I’m fairly sure I could put a quick video together (without AI) and slip that song into one of my music posts and no-one would notice. It’s an impressive technical achievement, although it might say more about the generic nature of indie music than the quality of the software…

  2. heartlessgamer

    For context: I’ve turned from “it’s a toy” to “there is something here,” so I am more optimistic about AI these days. We are also investing heavily at work and the payback has been almost immediate. AI is paying bills for us and it’s not hard for it to do so.

    If anything, what I am struggling with as a senior IT leader is how on earth the Microsofts / Salesforces of the world think folks are going to pay for AI when there are so many freely available tools that don’t need to wait for the vendor’s platform to update to take advantage of what’s new. We’ve found it easier to build our own chatbot and LLM on curated data relevant to us and integrate it than to wait for the MS/SFs of the world. All we need is the cloud cycles to power it.

    Also, I was having this debate on Bluesky the other day about “is AI just predictive text?” and I think a lot of that is relevant here.

    When you say things like “This is AI. A chat bot that gives you the answers it mathematically believes you want to hear without validating their correctness.” I feel that is an inaccurate understanding of what AI is actually doing. Is there math? Sure, but this is not the predictive text technology your statement leans toward, which brings me to:

    “The actual output can be, and often is, absolute lies.” This is why it is so different. It is able to synthesize its answers from “what it knows,” which means it is as likely as a human to make a wrong connection and pump out invalid info. Or to be minimally educated in a specific topic and thus do what humans do: bullshit its way through it vs saying “I don’t know”. Your EQ example is perfect. If you took that to any average gamer they’d probably flub it as much as the AI did.

    Change that AI to one with a curated dataset with better information about EverQuest and a defined persona of “a gamer that knows a lot about EverQuest and should answer questions as such” and it’d knock it out of the park, likely able to answer all sorts of questions accurately. Your average AI in the wild isn’t slurping up EQ expansion content, and your average AI user has no idea how to make it do that (which you sort of admit; you don’t know how to make the tech do what you wanted it to).

    This is why I also push back on “AI will never work in games for dynamic NPCs”. Sure, taking ChatGPT and expecting it to provide a game-specific experience is a fool’s errand. BUT take the technology, provide it general language capability, and then give it a curated data set of the game world, and it will 100% be able to provide a dynamic experience.
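
    To sketch what I mean… a toy illustration, with hand-curated facts and naive keyword matching standing in for a real vector store, and a hypothetical ask_llm() standing in for a real model call… the “curated dataset + persona” pattern looks roughly like this:

    ```python
    # Toy sketch of "curated dataset + persona" grounding.  FACTS is
    # hand-curated and retrieval is a naive keyword match; a real system
    # would use a vector store and a real model call behind ask_llm().
    FACTS = [
        "The Ruins of Kunark (2000) was the first EverQuest expansion.",
        "The Scars of Velious (2000) was the second EverQuest expansion.",
        "The Shadows of Luclin (2001) was the third EverQuest expansion.",
    ]

    PERSONA = ("You are a gamer who knows a lot about EverQuest. "
               "Answer only from the facts provided; otherwise say 'I don't know.'")

    def build_prompt(question: str) -> str:
        # Keep only facts sharing a word with the question (toy retrieval).
        words = set(question.lower().split())
        relevant = [f for f in FACTS if words & set(f.lower().split())]
        context = "\n".join(relevant) or "(no matching facts)"
        return f"{PERSONA}\n\nFacts:\n{context}\n\nQuestion: {question}\nAnswer:"

    print(build_prompt("What was the first EverQuest expansion?"))
    # The assembled prompt then goes to the model, e.g. ask_llm(prompt).
    ```

    Ground the model in data it can actually cite and constrain the persona, and the “making stuff up” problem shrinks dramatically.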

    I’ll make some bold predictions:

    1. AI NPCs based on LLMs (and similar models) will be the standard for RPGs in the future
    2. The next generation of games will be heavily developed via AI: art, code, and story
    3. Your average gamer of the future will have no idea 1 or 2 are happening because it will be that well done. Once we are past the first few “Elder Scrolls 7 is using AI!” headlines it will be a non-event for games to be doing it.

    Ultimately I see a lot of folks missing the forest for the trees because they asked ChatGPT/Co-pilot/Bard/etc some nuanced question it was never designed to answer and got a cute reply. There is an AI bubble as many companies will peak and crash, but underlying the bubble is technology that will shape the future.

    But if you are actually working with the technology and making it work, especially from the business perspective, you should 100% see the reality that this is going to fundamentally change things. Humans will still muck it up, so it’s always slower and less ideal than it should be.

    Also, yes, the big ChatGPTs of the world take a tremendous footprint to run, but you can also put a ChatGPT-style model in your pocket with a curated dataset for your preferences that costs nothing more to run than the battery life of your phone. You will see a lot of AI-on-the-device because the models can be jammed in pretty efficiently and many devices already have the chips needed. That is a business model that lots of folks are already comfortable with (i.e. software that runs on my device) vs the model of a “live service I have to pay for” (which, let’s be real, most people don’t like unless it’s Netflix).
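
    As a rough illustration of the on-device pattern… the llama-cpp-python bindings are one example, and the model file name here is a placeholder for whatever quantized GGUF model you’ve downloaded:

    ```python
    # Sketch of on-device inference using the llama-cpp-python bindings
    # and a small quantized model.  The model path is a placeholder.
    from llama_cpp import Llama

    llm = Llama(model_path="models/small-model.Q4_K_M.gguf", n_ctx=2048)

    out = llm(
        "Q: Name the first EverQuest expansion.\nA:",
        max_tokens=64,
        stop=["Q:"],  # stop before it invents a follow-up question
    )
    print(out["choices"][0]["text"].strip())
    ```

    No live service, no per-query fee; it runs on whatever silicon you already own.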

    And I’ll leave on this note. Your sentiment isn’t wrong. There is a lot of hype and a bubble is real. AI will replace human workers. AI companies will peak and crash. But AI will also generate a ton of new opportunity for workers in ways that we’ve never had before. It also, at a personal level for me, is making me much more productive in both work and personal endeavors. 

    I encourage anyone in tech to be learning, actually learning, how to make the current tools work for their context. You will be a better worker because of it. If you are in software dev of any sort you are going to be replaced by AI-assisted developers if you refuse to go along. And this isn’t some schlub frontline guy talking; I am high enough up to tell you this is happening now.

    Good topic.

    1. Wilhelm Arcturus Post author

      You dig into a couple of my unstated side-gripes about AI, one of which is that it is currently being sold as a general technology solution for the future.

      Selling the technology is cool for the VCs, but businesses pay for solutions… and won’t often pay extra for “it’s a furniture polish AND a dessert topping” sorts of approaches. If you’re selling the tech and not a specific solution then I see a big red flag.

      That is what gets you things like that New York City chat bot that generates wrong answers repeatedly and consistently because nobody bothered to understand the problem; they just threw the new tech that promises to solve all possible problems at it, only to find out its abilities were oversold.

      My current position as a product manager responsible for making internal apps has me between our Microsoft rep (because we’re all in on Azure and GitHub) telling me we need to use Co-Pilot or he’ll tell my boss or his boss or his boss or whoever that I am not being cooperative, and the IT security team, who are telling me not to use any AI-generated code in our apps, including Co-Pilot, due to security concerns.

      Fortunately the devs who work for me are really good and thorough and want to understand the code even when they have been shopping for ideas over at Stack Overflow, so I am not too worried about that. They know who gets called in the middle of the night when the app goes down… well, I get called first, but I know all their cell phone numbers. But I get an earful of “this will solve all problems” from one side and “AI is Satan” from the other.

      Somewhere there is a chart about the cycle of technology that shows the up-front hype phase where the new thing is suddenly going to solve all problems, the disappointment when it doesn’t, which includes a lot of startups that had no real plan going out of business, and then, years down the road, the introduction of actual practical applications of the technology in a useful form. We’re still in that hype phase and it is just grating on me, after so many times through this, that nobody can see it for what it is.

      My observation about this idea feeding the naked greed of Wall Street, though… that is a scary feedback loop.

    2. heartlessgamer

      Being part of buying decisions in tech, I can say AI is already the first question, and the answer is more important every week as the pace of change in AI value accelerates. You are correct in that “businesses pay for solutions,” which is why the AIs that actually do things are going to be the first big winners.

      AI-assisted code development is just a no-brainer. We are seeing developers with AI assistance create at a rate that is crazy to consider. Code optimization is in the hands of everyone now. Thinking you can do better than the AI simply won’t fly in the very near future. Coders will mostly be looking at refining AI-generated code. I honestly can’t see it going any other way except for the most stubborn.

      We are also just on the precipice of AI models that are designed for code assistance/optimization. In the next year there are going to be more and more models refined for specific languages and stacks, many of which are specifically targeting the security concern. I’d also argue that if your code review methods / pen testing / etc. are not up to snuff to catch security issues in AI code, then that has nothing to do with AI and probably warrants a look at your process.

      Note: I do not advocate throwing code onto a public/non-secure AI chatbot, but building a secure in-house version is not difficult and relatively cheap.

      You are also referring to the Gartner hype cycle: https://en.wikipedia.org/wiki/Gartner_hype_cycle

      There are some arguing that we are already at the trough of disillusionment and many more arguing we are still at the peak of inflated expectations. Personally I think we are past both peak and trough and are on the slope of enlightenment, which I am sure puts me in the minority.

      I think folks missed how much ChatGPT peaked originally and then crashed, and with it expectations; many folks’ expectations were set by models that were years old by the time they were playing around with them.

      There isn’t a week that goes by now where I’m not having “oh wow” moments with AI (AI being a very loose term, and I am not sure we said that out loud; lots of stuff gets the AI tagline). I’ve had things I thought were impossible a week ago that are now not only possible but already old news. Yes, it all has to be made into a solution, but that is becoming less and less of a problem every week.

      I think the biggest breakthrough in my thinking on AI is to think of it less as one giant AI solution and more of an agent model: AI agents that are aimed at very specific things and then combined to work amongst themselves. Less is required for each agent to be successful, and agents can work together to accomplish tasks. Underlying each is the same AI technology. Ironically, that is just how your average business works to produce a widget :P
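
      A toy sketch of that agent idea, with plain functions standing in for model-backed agents (the names and the three-step pipeline are illustrative, not any particular framework):

      ```python
      # Plain functions stand in for model-backed agents.
      def research_agent(task: str) -> str:
          # In a real system: a narrow, tool-using model call.
          return f"[notes gathered for: {task}]"

      def writer_agent(task: str, notes: str) -> str:
          return f"Draft covering '{task}' based on {notes}"

      def reviewer_agent(draft: str) -> str:
          return draft + " [reviewed: OK]"

      def coordinator(task: str) -> str:
          # Each agent handles one narrow step; none solves the whole task.
          notes = research_agent(task)
          draft = writer_agent(task, notes)
          return reviewer_agent(draft)

      print(coordinator("summarize this week's QA fallout"))
      ```

      Each piece only has to be good at one thing, which is a much lower bar than one giant model that has to be good at everything.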

      And your fear of naked greed is not unwarranted, but I hate to break it to you: that fear is there whether there is AI or not. It’s just that a lot of us in the information sector now get to worry about replacement the same way manual labor jobs have for the past 30-ish years.

      With that said, I am still optimistic that this technology can empower individuals as much as, if not more than, it can businesses, which I think will be a net increase and may be the first time we see the productivity slide reverse.

    3. heartlessgamer

      And just to highlight how fast my personal/work world is changing. 

      I don’t use web search anymore; I am either asking MS Co-pilot, my work’s GPT, or Perplexity AI. I barely web surf now that I can let things like Perplexity AI “browse for me”. I literally used a GPT to help with my responses to this very post!

      I am using our business analysts at work to do things other than just chase meeting notes and actions. People laugh, “you posted meeting notes?”, knowing that traditionally I lazied my way out of posting any sort of meeting notes :P

      Our QA staff at work are now QAing close to 100% of customer interactions because our work GPT can do the heavy lifting and feed them the fallout for action. We had to downsize previously, so we were only QAing very specific things on a fraction of a percent of overall interactions. Now we just QA everything, because we can feed it through a GPT and, honestly, it’s better than humans at actually checking 100% of the QA target. That took a couple weeks to make happen.

      In my personal life I used ChatGPT to vet our spring break vacation (my wife is not giving up that task quite yet). I know others that have used it to give them an itinerary for visiting major cities they have spare time in.

      I dabble in game design, mainly thinking about CCG/TCGs. I’ve never really had art before for my cards. I can pump out amazing card art nonstop all day long now. Just look at my stupid AI images on my blog :P

      I still write a good number of user stories for the team (SCRUM/agile) and 99% of them now get rewritten via an AI that we’ve tuned to help provide clarity and specificity in our stack. We didn’t even tell the devs/admins (still haven’t); they just pick up stories like any other work, and we’ve cut down grooming a good bit because the starting point is just better.

      Am I more productive? That is subjective, but I am taking on more work and doing odd tasks that need getting done that I wouldn’t have before. I may even sneak a login to my favorite game now and then :P

    4. Wilhelm Arcturus Post author

      My measure for VC hype is, as mentioned, VentureBeat, and when 4 out of 5 pieces on that site are about AI, my gut says we have yet to hit the hype peak. We will be in the trough when they back-burner it for some other new instant panacea.

  3. Tipa

    The company I work for is making a huge investment in AI. But they love buzzwords; they loved crypto, blockchain and probably NFTs in the past. (Not sure about NFTs, but they love jumping on the bandwagon, so…).

    GitHub Copilot is decent. It and other AI assistants are often very good and have transformed how I code.

    I used to ask AI to make up stories, but the stories were so terrible that even I could do better, so I don’t do much of that anymore.

    Training on specific data can cut down on hallucinations; I doubt the NYC chat bot was trained on the city and state rules, laws and regulations. If it HAD been, it might actually be useful. Training an LLM is a time consuming and expensive proposition that requires a fair amount of human shepherding. I’m sure NYC could do it, but it’s clear they didn’t.

    As far as AI NPCs, I keep thinking about No Man’s Sky. A procedurally generated galaxy. No two planets were exactly the same, but there was nothing special about any of them, ever. There is no benefit to having a vendor NPC think of a hundred different ways to say, “Those carrots will cost five platinum pieces”.

    AIs are all about doing what they think you expect. They aren’t there to surprise you or to strongly advocate for a position. They can’t.

    1. Wilhelm Arcturus Post author

      There are some parallels with blockchain beyond just the hype. Nobody needed blockchain for the solutions it promised. It didn’t do anything we couldn’t do before.

      Likewise with a generalist large language model; there aren’t a lot of solutions that need that much effort thrown at them. I have no doubt that the NYC chat bot was some vendor promising instant magic and the mayor or his advisors buying the BS and running with it despite the problems with every previous such attempt.

      I am sure you could go to the NYC chat bot and ask it to generate JavaScript code for a tic-tac-toe game and it would give you an answer.

    2. heartlessgamer

      Speaking of stories, my oldest’s first ChatGPT adventure was having it tell him a story, and he was sort of “mind blown” that he could tell it to change the story to his liking. As terrible as the story may have been from an author’s lens, it was still a neat “tech dad” moment to see. It cracked me up at one point where he had to keep reminding it to “not forget the boy’s dog” in the story.

      In regards to the procedural generation of planets in No Man’s Sky, I do agree, but that’s because there is a limit to procedural generation; it can’t at any point actually create something and is thus stuck generating a lot of “meh”.

      The new generation of AIs are actually creating things. That is how you can get one to give you an image of an alligator riding a bike, whereas procedural generation has to be given an alligator riding a bike first (or be given an alligator with a definition that it can ride a vehicle and that a bike is a possible vehicle) before it can procedurally place it on your screen. The AI doesn’t need to be given either, other than having them both exist in the dataset from which it creates.

  4. PCRedbeard

    Tell me how this makes you feel…

    That middle one makes me think I ought to go listen to The Division Bell.

    Microsoft is going all in on AI, even in their security products. They’ve even had online presentations where among the first things they tell you is that “this is not just slapping a new interface on Chat GPT” or some such nonsense. I just wonder how long the BS will work for their clients.

    1. heartlessgamer

      Having seen MS prices and knowing the cost of building our own, I don’t see how MS is going to win here. Honestly, they are going to have to make it part of their core offering to stay relevant. They won’t exist if they don’t. Buying decisions on new systems/software already require AI as table stakes.

      Companies are fearing corporate-extinction-level events from being behind on AI, and I think it’s a warranted fear.

    2. PCRedbeard

      I just wish they’d stop saying “AI” when it’s merely better-designed algorithms. Nobody called Google an AI when it first rolled out, but it was certainly a quantum leap in search capability; enough that if it happened now it’d be called “AI Search”.

    3. heartlessgamer

      I think it’s a disservice to not call them AI when that is what they are. They are not “merely better designed algorithms,” as that entirely ignores the learning aspect. In my simplified view, if it is learning and reinforcing that learning with humans, then AI is a proper term.

      In any tech cycle things will be mislabeled for marketing purposes.

    4. Wilhelm Arcturus Post author

      On the one hand, we have applied the term AI to ridiculously simple programs historically. Every video game that has a computer controlled player has an AI in industry parlance. So the bar seems well and truly cleared for even the dumbest thing being branded as AI.

      On the other hand, we have had algorithmic learning for a long stretch. I was working on attempts to automate email responses with technology from Banter and YY Software over 20 years ago, and both used a learning/training methodology not dissimilar to what is being done now, save for having much smaller data sets. Two years of email questions sent to the Royal Auto Club was one such data set, a regrettable story which I will get to at some point in my work tales. But if you had called either of those products AI I probably would have laughed.

      And both companies could not, in the end, deliver and disappeared into the endless cycle of acquisitions and failures that make up the valley.

    5. PCRedbeard

      It’s not learning in the same fashion that humans learn. It’s more akin to opening up a pipe and shoving a mountain of data in, looking for patterns and identifying critical points for identification purposes.

      If you ask Chat GPT why it exists, it tells you that

      “I exist to assist and interact with users like yourself. My purpose is to provide helpful responses, answer questions, generate text based on prompts, and engage in conversation on a wide range of topics. Additionally, I serve as a tool for exploration, learning, and creativity, aiming to facilitate various tasks and activities through natural language understanding and generation.”

      It reads like the Chat GPT version of the Catholic Catechism, only wordier. Or perhaps a better answer is that it only knows how to write an answer without providing an opinion, because when you ask it “Why do bad things happen to good people?” you get a six part response identifying the various schools of thought on that question. It doesn’t give you an opinion on an existential question, just a listing of what options there are. That leap from a listing to a choice, or even merely saying “I don’t know what the real answer is,” is a clear marker of true intelligence or conscious thought. It can’t create new data that is used as its own internal training data (it told me so itself) and the current design does not allow it, well, “creative freedom” to do its own thing. It can’t explore or write or draw or read or design of its own accord.

      I’m reminded of the so-called “cognitive leap” in humans that happened around 70,000 years ago (give or take 20,000 or so years), where we went from merely being able to say what we saw (such as ‘the tiger is over here’) to creating things, stories and art, that we hadn’t seen. Companies such as OpenAI are trying to get as close as possible to that cognitive leap within machines without letting them take that leap themselves.

  5. Lewis Maskell

    AI is a tool that, like all tools, is being misused.

    But where I work (healthcare) the consequences of misuse are literally life-threatening. And knowing how shoddy most healthcare IT software & implementation is …

    I fear the lawsuits.

    1. Wilhelm Arcturus Post author

      I recall a study where they were training an AI with images and the positive examples were all marked in a small way, which the AI picked up on; the initial result was that the AI learned that an orange dot in the corner of the picture meant the person was in the group being screened for. We’re bad at training AI, and that ends up in the results.

  6. Pallais

    On a personal level, I’ll feel like AI or LLM-based chatbots have arrived when I can use an IRS-created one to get answers to complicated tax questions that will pass IRS auditor and/or Tax Court muster. Until then, yeah, it’s in the hype cycle, and until it can give accurate answers to factual questions it isn’t worth getting excited over.

  7. Anonymous

    I’ve been reading since not long after you started. I check every day and enjoy most of your content (the EVE economy stuff isn’t for me). This could be your best post ever and I’m sharing it with tons of people. Well done sir, I’m always appreciative of your free content, but this was just… chef’s kiss.

    thank you

  8. MagiWasTaken

    Great post, Wilhelm! Really enjoyed your writing in this one and the points you raised are really resonating with me!

    To add an example of what you talked about: the anime industry in Japan is booming, with lots of different boards funding way too many adaptations each “season” (the three-month cycles in which new anime airs) that all need to be animated. The boards make all the money and simply hire the studios that do the work… and because a lot of studios have to take multiple projects at once per season, the general work environment for animators is horrible. Poor working conditions and a high turnover rate are the general tenor. Animators are overworked and underpaid… and since there are so many people burning out (if not even killing themselves) in the industry, AI has been used more and more.

    …which is a bad thing. Instead of improving the working conditions for animators, the bigwigs at different boards and studios are deploying AI to create backgrounds, colouring, shading, etc. The compositing work that really brings a show to life can now be done by a machine that is much, much cheaper than an actual GFX-trained person.

    There have been a few boycotts of anime produced by Netflix that used AI, for instance, as well as of shows produced by other studios… but the industry is so big, and the vast majority of people care so little about the ethics behind the shows they’re consuming, that it doesn’t really matter what a few fans of human-made art think.

    So, it will only get worse over time.

    When it comes to AI in our day-to-day lives, I’ve noticed more and more how many people in my university courses over-rely on language models like ChatGPT to create shitty papers or presentations. I’ve seen it a few times where people in my groups would literally just copy and paste what ChatGPT told them onto a presentation or their notes without ever reading through it.

    Some of the sentences don’t make sense in the context of the topic they’re talking about, some aren’t factually correct whatsoever, but they rely on it so much that they never think about anything they do.

    AI is being oversold so much that people just believe everything it shits out without ever questioning any of it. Critical thinking was bad enough for folks as is, but this just adds another layer. Not to mention the number of folks that just want to “consume” content shat out by the machines and who try to defend AI NPCs in games and AI-generated artwork or stories.
