The ‘strawberrry’ problem: How to overcome AI’s limitations

Last updated: October 12, 2024 9:32 pm
Published October 12, 2024


By now, large language models (LLMs) like ChatGPT and Claude have become household names around the globe. Many people have started worrying that AI is coming for their jobs, so it is ironic to see nearly all LLM-based systems flounder at a simple task: counting the number of "r"s in the word "strawberry." They are not failing only on the letter "r"; other examples include counting "m"s in "mammal" and "p"s in "hippopotamus." In this article, I will break down the reason for these failures and offer a simple workaround.

LLMs are powerful AI systems trained on vast amounts of text to understand and generate human-like language. They excel at tasks like answering questions, translating languages, summarizing content and even producing creative writing by predicting and constructing coherent responses based on the input they receive. LLMs are designed to recognize patterns in text, which allows them to handle a wide range of language-related tasks with impressive accuracy.

Despite this prowess, failing to count the number of "r"s in the word "strawberry" is a reminder that LLMs are not capable of "thinking" like humans. They do not process the information we feed them the way a human would.

Conversation with ChatGPT and Claude about the number of "r"s in "strawberry."

Nearly all of today's high-performance LLMs are built on transformers. This deep learning architecture does not directly ingest text as input. Instead, it uses a process called tokenization, which transforms the text into numerical representations, or tokens. Some tokens may be full words (like "monkey"), while others may be parts of a word (like "mon" and "key"). Each token is like a code that the model understands. By breaking everything down into tokens, the model can better predict the next token in a sentence.
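
To make this concrete, here is a minimal tokenization sketch. It assumes the open-source tiktoken library and the cl100k_base encoding used by some OpenAI models; the exact token boundaries and IDs depend entirely on the tokenizer, so treat the output as illustrative only.

# A minimal tokenization sketch, assuming the tiktoken library (pip install tiktoken).
# Token boundaries vary by tokenizer; this is only illustrative.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

for word in ["monkey", "strawberry", "hippopotamus"]:
    token_ids = enc.encode(word)
    # Decode each token ID back to its text piece to see how the word was split.
    pieces = [enc.decode([tid]) for tid in token_ids]
    print(word, "->", pieces)

# The model only ever sees the token IDs, not individual letters, so a question
# like "how many 'r's are in strawberry?" is never directly visible to it.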


LLMs don't memorize words; they try to learn how these tokens fit together in different ways, which makes them good at guessing what comes next. In the case of the word "hippopotamus," the model might see the tokens "hip," "pop," "o" and "tamus" and never know that the word "hippopotamus" is made up of the letters "h", "i", "p", "p", "o", "p", "o", "t", "a", "m", "u", "s".

A model architecture that could look directly at individual letters without tokenizing them could potentially avoid this problem, but for today's transformer architectures it is not computationally feasible.

There is also the matter of how LLMs generate output text: they predict the next token based on the previous input and output tokens. While this works well for producing contextually aware, human-like text, it is not suited to simple tasks like counting letters. When asked for the number of "r"s in the word "strawberry," an LLM is purely predicting the answer from the structure of the input sentence.
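
As a rough illustration of that prediction loop, here is a toy greedy decoding sketch. The predict_next_token_logits function is a hypothetical stand-in for a real transformer's forward pass; the point is only that each step selects a whole token, never an individual letter.

# Toy sketch of autoregressive generation. predict_next_token_logits is a
# hypothetical placeholder for a real transformer's forward pass.
from typing import List

def predict_next_token_logits(tokens: List[int], vocab_size: int) -> List[float]:
    # A real model would return a learned score for every token in its vocabulary.
    return [0.0] * vocab_size

def generate(prompt_tokens: List[int], max_new_tokens: int, vocab_size: int, eos_id: int) -> List[int]:
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        logits = predict_next_token_logits(tokens, vocab_size)
        next_id = max(range(vocab_size), key=lambda i: logits[i])  # greedy: pick the highest-scoring token
        tokens.append(next_id)
        if next_id == eos_id:
            break
    return tokens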

Here's a workaround

While LLMs may not be able to "think" or reason logically, they are adept at handling structured text. A prime example of structured text is computer code, in any of many programming languages. If we ask ChatGPT to use Python to count the number of "r"s in "strawberry," it will most likely get the right answer. When an LLM needs to do counting or any other task that requires logical reasoning or arithmetic computation, the broader software can be designed so that the prompts ask the LLM to use a programming language to process the input query.
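
As a concrete sketch of that workaround, the snippet below shows both the trivial counting code the model could be asked to produce and one way an application might wrap a user query with such an instruction. The prompt wording and the ask_llm helper are illustrative assumptions, not any particular product's API.

# Once the counting happens in code rather than in token space, it is trivial.
word = "strawberry"
print(word.count("r"))  # prints 3

# A sketch of how an application might steer the model toward using code.
def build_prompt(user_question: str) -> str:
    return (
        "Write a short Python snippet that answers the question below, "
        "run it, and reply with only the final result.\n\n"
        f"Question: {user_question}"
    )

# Example usage, assuming some hypothetical ask_llm(prompt) -> str helper:
# answer = ask_llm(build_prompt('How many "r"s are in the word "strawberry"?'))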


Conclusion

A simple letter-counting experiment exposes a fundamental limitation of LLMs like ChatGPT and Claude. Despite their impressive capabilities in generating human-like text, writing code and answering almost any question thrown at them, these AI models cannot yet "think" like a human. The experiment shows the models for what they are: pattern-matching predictive algorithms, not "intelligence" capable of understanding or reasoning. However, knowing in advance what kinds of prompts work well can alleviate the problem to some extent. As AI becomes more integrated into our lives, recognizing its limitations is crucial for responsible use and realistic expectations of these models.

Chinmay Jog is a senior machine learning engineer at Pangiam.

