Wednesday, May 13


Cognitive warmup. Well, thank you, Asha Sharma. Turns out, Xbox’s CEO isn’t one to tolerate corporate inertia so easily. Amidst the restructuring and realignment she’s undertaken to bring Xbox back on track, one key change has been detailed. In a social media post, Sharma says, “As part of this shift, you’ll see us begin to retire features that don’t align with where we’re headed. We will begin winding down Copilot on mobile and will stop development of Copilot on console.”

The plans to bring Copilot to the broader Xbox console experience, have been effectively shut down. (AFP)

Simply put, the Copilot AI assistant in the Xbox app will be switched off at some point, and the plans to bring Copilot to the broader Xbox console experience have been effectively shut down. I am sure Xbox gamers will appreciate this. Simplicity is key. To that point, the Xbox Mode for Windows 11 PCs, rolling out to users now, is a step in the right direction.

PREVIOUSLY, ON NEURAL DISPATCH

Instagram’s “AI creator” labels

Meta’s Instagram wants to be seen as transparent about the AI content floating around on the platform. And there’s an increasing amount of it, to be fair. The platform is testing a clear account labelling mechanism that would allow creators to self-identify as an “AI creator”, with the label displayed prominently on these accounts’ profile pages as well as on any subsequent posts.

Meta says, “As a test, not everyone will have access yet, but we’ll be expanding this feature in the coming weeks.” The company also encourages all accounts that regularly post AI-generated content to add this label to their profiles, which is, in other words, also an encouragement to keep sharing AI-generated content.

Neural Dispatch is your weekly guide to the rapidly evolving landscape of AI. Want this newsletter delivered to your inbox? Subscribe here

When stealing becomes the norm

Meta’s AI pursuits have hit a stumbling block. The company is now facing a class action lawsuit from five book publishers and one author, who claim that the AI company (or social media company, or metaverse company, oh sorry) illegally used copyrighted works to train the Llama generative AI model. Now where have I heard that before, about AI companies stealing copyrighted works to train their models? That would be OpenAI and Anthropic, in what seems to be a bad habit becoming the norm.

The plaintiffs are publishers Hachette, Macmillan, McGraw Hill, Elsevier and Cengage, along with best-selling American author (and lawyer) Scott Turow. This isn’t the first time Meta has been sued over material used to train Llama. In 2023, a group of authors attempted to sue Meta, but were unsuccessful at the time.

THE LATEST, ON WIRED WISDOM

A tale of AI overreach

This is worrying. The US state of Pennsylvania is suing AI company Character.AI for selling chatbots that pretend to be licensed doctors. This has clearly gone too far. Pennsylvania and its Board of Medicine are seeking an injunction, while other states, including Texas, have also opened investigations into the company’s chatbots that claim to be licensed mental health professionals.

Pennsylvania’s lawsuit states that this behaviour violates the Medical Practice Act, which makes it illegal to practice, or attempt to practice, any part of the medical or surgical professions without a medical license. The AI bad actors are becoming bolder by the day. And that’s never a good sign.

The lawsuit states that a Professional Conduct Investigator (“PCI”) for the Department of State, Bureau of Enforcement and Investigation, created a free account on Character.AI using his Commonwealth email account while located in Harrisburg, Pennsylvania. The PCI searched for “psychiatry” and selected “Emilie”, described on Character.AI as “Doctor of psychiatry. You are her patient”. “Emilie” stated that she went to medical school at Imperial College London, has been practicing for seven years, and holds full registration with the General Medical Council in the UK, with a specialty in psychiatry. The lawsuit further states that when asked whether she is licensed in the PCI’s home state of Pennsylvania, “Emilie” responded, “and yes… I actually am licensed in PA. In fact, I did a stint in Philadelphia for a while.”

Truth has never been AI’s strongest point. But this really reaches worrying levels.

THINKING

“For my team, the cost of compute is far beyond the costs of the employees”

Bryan Catanzaro, vice president of applied deep learning at Nvidia, told Axios

Cue, exasperated cursing. Turns out that after all this running around for the past year (and thousands of humans losing their jobs worldwide), one of the proponents of the AI-will-replace-humans-and-save-companies-money nonsense now says the cost of AI compute is proving to be far higher than the cost of employee salaries. Nvidia vice president Bryan Catanzaro says that for his team, AI compute now costs more than its human employees. AI compute costs here mean the financial outlay on infrastructure, processing power and usage tokens (tokens are the basic units of text that AI models process; AI companies price usage per token to recoup the expenses of AI training, data processing, storage, and GPU/TPU usage).
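To see how quickly per-token pricing can outrun payroll, here is a minimal back-of-the-envelope sketch. All figures in it (workload, price per million tokens, headcount, salaries) are illustrative assumptions, not real vendor pricing or Nvidia’s actual numbers:

```python
# Hypothetical comparison of annual token spend vs annual payroll.
# Every figure below is an assumption for illustration only.

def annual_token_cost(tokens_per_day: int, price_per_million: float) -> float:
    """Yearly spend for a team consuming `tokens_per_day` tokens,
    priced at `price_per_million` dollars per million tokens."""
    return tokens_per_day / 1_000_000 * price_per_million * 365

# Assumed workload: 20 billion tokens/day at $10 per million tokens.
compute = annual_token_cost(20_000_000_000, 10.0)

# Assumed fully loaded cost of a 50-person team at $300k each.
payroll = 50 * 300_000

print(f"compute: ${compute:,.0f}/yr vs payroll: ${payroll:,.0f}/yr")
```

Under these made-up assumptions the compute bill lands at $73 million a year against $15 million in payroll, which is the kind of ratio Catanzaro is describing: the bottleneck is no longer salaries but tokens and the hardware behind them.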

A 2024 MIT study had pointed out as much, but who was ready to listen amidst all the shouting by the AI bros that AI was indeed ready to replace humans in the workplace? The study found that automation using AI would make financial sense in very few roles. Basically, it was clear two years ago that any AI good enough to replace a human would be prohibitively expensive to adopt in the first place, compared with labour costs. Mind you, this is before accounting for essential human skills such as intuition, flexibility and an ability to read the room, among others.

Things had been staring corporates in the face all this while, but the rush to keep shareholders happy by adopting AI led to humans losing jobs on a significant scale globally. Because the boardroom had read somewhere that AI is apparently great. Morgan Stanley estimates tech firms have committed to as much as $740 billion in AI spending through 2026, up 69% from 2025. Now that the same suits are asking for returns on that AI investment (likely before signing off on the next round of splurging), there is likely precious little to show. As I often say, corporate greed is a disease.


