Apple Strikes Again
Apple's AI may be "good enough" to destroy a lot of competitors.
A lot of artificial intelligence startups died this week.
Apple introduced Apple Intelligence, which includes image and text generation along with the ability to search and understand a lot of content housed on your personal device.
Image generation and text prompts aren’t new, but Apple is doing most of the AI computing on-device, negating the need for the cloud computing that powers most AI tools today.
When combined with the generalizability of AI that I talked about a few weeks ago, I have to wonder if Apple will suck the oxygen out of the AI ecosystem.
Asymmetric Investing has a freemium business model. Sign up for premium here to skip ads and get double the content, including all portfolio additions.
Artificial Intelligence & On-Device Compute
The common narrative in AI has been that NVIDIA is the clear winner and will grow as quickly as it can make chips. Cloud infrastructure from Google, Meta, Amazon, Microsoft, Snowflake, Databricks, and others is the next “obvious” play in AI.
I think Apple proved that wrong this week by moving the most impressive new AI features on-device.
This is from an interview Ben Thompson did with Daniel Gross and Nat Friedman that was published today.
I don’t fully understand and I never fully have understood why local models can’t get really, really good, and I think that the reason often people don’t like hearing that is there’s not enough epistemic humility around how simple most of what we do is, from a caloric energy perspective, and why you couldn’t have a local model that does a lot of that. A human, I think, at rest is consuming like 100 watts maybe and an iPhone is using, I don’t know, 10 watts, but your MacBook is probably using 80 watts. Anyway, it’s within achievable confines to create something that has whatever the human level ability is, it’s synthesizing information on a local model.
What I don’t really know how to think about is what that means for the broader AI market, because at least as of now we obviously don’t fully believe that. We’re building all of this complicated data center capacity and we’re doing a lot of things in the cloud which is in cognitive dissonance with this idea that local models can get really good. The economy is built around the intelligence of the mean, not the median. Most of the labor is being done that is fairly simple tasks, and I’ve yet to see any kind of mathematical refutation that local models can’t get really good. You still may want cloud models for a bunch of other reasons, and there’s still a lot of very high-end, high-complexity work that you’re going to want a cloud model for, chemistry, physics, biology, maybe even doing your tax return, but for basic stuff like knowing how to use your iPhone and summarizing web results, I basically don’t understand why local models can’t get really good.
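The wattage comparison in that quote is worth making concrete. Here's a quick back-of-envelope sketch using Gross's rough figures (his off-the-cuff estimates, not measured values):

```python
# Rough power budgets from the quote above (Gross's estimates, not measurements)
HUMAN_WATTS = 100    # a human at rest
IPHONE_WATTS = 10    # an iPhone
MACBOOK_WATTS = 80   # a MacBook

# If human-level synthesis runs on ~100 W of biological compute, consumer
# hardware already operates in the same order of magnitude of power draw.
iphone_ratio = IPHONE_WATTS / HUMAN_WATTS
macbook_ratio = MACBOOK_WATTS / HUMAN_WATTS

print(f"iPhone draws {iphone_ratio:.0%} of a resting human's power budget")
print(f"MacBook draws {macbook_ratio:.0%} of a resting human's power budget")
```

The point isn't precision; it's that the power envelope of the devices in your pocket and bag is within striking distance of the biological benchmark, which is the intuition behind "local models can get really good."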
If 90% of AI goes local, what are all the data center GPUs for?
I’m not suggesting data center GPUs are not necessary, but rather wondering if the growth rate of data center investment will slow as compute moves on-device.
Apple Generalizes…Everything?
When Apple introduces a new product or capability, it doesn't typically start with the most advanced features. It starts with relatively basic use cases people will actually use, then builds up, eating more and more of the ecosystem.
The original iPhone didn’t have an app store.
Then there was an app for that.
Then there were Apple Apps for that.
I dove into why I think the generalizability of AI will lead to a lot of value destruction here. As far as Apple goes, I think we are seeing the early phases of Apple eating more of the AI ecosystem and Apple may have even more power in AI than apps because of how generalizable a model can be.
Who needs Grammarly when spelling and grammar checking is on-device for free?
The same can be said for Duolingo.
Even Adobe Photoshop and Canva should be nervous.
If an on-device, generalizable model can do 90% of AI tasks, what are we building all of this cloud computing and specialized capability for?
More Questions Than Answers
AI is evolving quickly and we know very little about what the market will look like long-term.
What I think we know is:
AI models themselves are becoming a commodity/open-source
AI computing will move on-device
Applications and models are becoming more generalizable
E.g., LLMs started as text-only and now include image and video capabilities
As an investor, I think that leads to more value destruction than value creation…at least over the next few years.
Apple has a major point of leverage as the device hundreds of millions of people interact with every day.
Google has points of leverage with multiple products that have billions of users and the other mobile ecosystem (Android).
Those two companies seem to be winners in AI, whether there’s incremental revenue from AI or not.
Everyone else may be on shaky ground as AI is commoditized, moves on-device, and becomes generalizable.
Disclaimer: Asymmetric Investing provides analysis and research but DOES NOT provide individual financial advice. Travis Hoium may have a position in some of the stocks mentioned. All content is for informational purposes only. Asymmetric Investing is not a registered investment, legal, or tax advisor or a broker/dealer. Trading any asset involves risk and could result in significant capital losses. Please, do your own research before acquiring stocks.