Saturday, March 21


Goldman Sachs is integrating AI deeply into its operations, aiming to revolutionize client onboarding, risk management, and more with a new operating model. While CEO David Solomon expresses optimism about AI’s transformative potential and the bank’s strong financial performance, he also candidly warns shareholders about risks like incorrect outputs, data breaches, and external dependencies.

Goldman Sachs wants its shareholders to know two things about AI: it’s central to the firm’s future, and it can go wrong in ways that matter. In its 2025 annual report, released Friday, the bank laid out both its AI ambitions and a candid set of warnings about the technology it is betting on. CEO David Solomon struck a bullish tone overall—net revenues rose 9% year-over-year to $58.3 billion, EPS grew 27% to $51.32, and return on equity improved 230 basis points to 15%. But the AI section of the letter was notably measured.

Goldman is applying AI across six operational areas it calls "ripe for disruption"

The bank announced what it is calling One Goldman Sachs 3.0—a new operating model built around AI. The six workstreams it is targeting first are client onboarding and KYC, vendor management, regulatory reporting, lending, enterprise risk management, and sales enablement. The ambition, per the letter, isn't just platform upgrades. It's a front-to-back rethink of how the firm organises people, makes decisions, and thinks about productivity and resilience.

Goldman has also deployed its GS AI chatbot across all 47,000-plus employees and partnered with Cognition Labs to build bespoke tools.

But the bank is also warning investors about what AI can get wrong

The firm's risk disclosures are where the letter gets specific. It flags that generative AI models can produce incorrect outputs—and in the worst case, that could mean the release of private, confidential, or proprietary information, or outputs shaped by biases baked into training data.

The bank also notes its reliance on third-party AI developers, which creates a dependency on how those providers build and update their models—a dependency Goldman doesn't fully control. And on the threat side, it acknowledges that bad actors could use AI capabilities to commit fraud, misappropriate funds, or launch cyberattacks. The legal and regulatory landscape around AI, the letter notes, remains uncertain and fast-moving.

Solomon still closed with optimism. AI, he wrote, will reshape how people live and work—but the speed of its adoption raises significant questions, and there will be winners and losers. Goldman, for now, is moving fast enough to want to be in the first group, while being careful enough to tell shareholders what could still go sideways.
