What’s Ahead for Apple and LLMs in 2023

According to a study by StatCounter, Google accounts for 92.06% of all search queries worldwide, while Siri accounts for only 2.19%.

Bard runs on cloud-hosted LLMs; should Apple bother building its own foundation models?

Apple has been relatively quiet about its work on large language models (LLMs), but there are signs that the company is making significant progress in this area. In recent years, Apple has acquired AI startups such as Xnor.ai, which specializes in on-device machine learning, and Voysis, which specializes in natural language processing. Apple has also been hiring engineers with experience in LLMs, and it has been granted several patents related to this technology.

So what does the future hold for Apple and LLMs? Here are a few possibilities:

  • Apple could use LLMs to improve and power Siri, to make the App Store more personalized, or to improve the accuracy of Apple Maps. Apple has a lot of catching up to do in all three areas.
  • Apple could use LLMs to help developers with app development. For example, new virtual assistant APIs could let developers embed assistant-style “autopilot” features natively in their apps.
  • Apple could license its LLM technology to other companies. This would allow other companies to benefit from Apple’s research and development, and it would generate revenue for Apple.

It’s still too early to say exactly what Apple plans to do with LLMs, but it’s clear that the company is taking this technology seriously. LLMs have the potential to revolutionize the way we interact with computers and phones, and Apple and Google are arguably the two companies best positioned to lead the way, thanks to their distribution.

In addition to the possibilities listed above, it’s also worth considering how Facebook (Meta) and other application developers could benefit from Apple’s efforts to build LLMs and make them work on device (and not in the cloud).

For example, Facebook could use LLMs to improve the performance of its search engine, to make its advertising more targeted, or to develop new features for its social media platforms.
After all, Meta’s apps, Instagram and WhatsApp, are native applications with millions of DAUs. Other application developers could use LLMs to create new kinds of games (think generative content), to develop new educational tools, or to build new text-based interfaces that change how we interact with our devices.

Native, on-device LLMs should be, and likely will be, the focus of Apple’s LLM efforts. This means that Apple’s models would run locally on the device, rather than in the cloud. This has several advantages, including:

  • Privacy: When LLMs run on device, all of the data is processed locally. Queries never have to leave the phone, so Apple does not need to collect or share the data used to run the models.
  • Speed: On-device LLMs can respond with lower latency than cloud-hosted ones, because there is no need to send data back and forth between the device and a server.
  • Cost: This matters even more, because every generated answer costs significantly more to serve than a traditional retrieval query. The cost of a local LLM is mostly battery drain, and it is substantial; Apple’s “M”-chip play could turn out to be a huge win here.
  • Reliability: On-device LLMs are more reliable with respect to connectivity than cloud-hosted ones, because there is no need to worry about network outages or other disruptions. Note: this is about availability, not accuracy.
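The cost point above can be made concrete with a quick back-of-envelope calculation. All of the dollar figures and token counts below are illustrative assumptions, not measured prices; they are only meant to show why per-query generation cost dwarfs per-query retrieval cost:

```python
# Back-of-envelope comparison of per-query serving cost:
# cloud LLM generation vs. traditional search retrieval.
# All figures are assumed for illustration, not real prices.

CLOUD_COST_PER_1K_TOKENS = 0.002   # assumed $ per 1K generated tokens
TOKENS_PER_ANSWER = 250            # assumed average answer length
RETRIEVAL_COST_PER_QUERY = 0.0002  # assumed cost of one classic search query

def llm_cost_per_query(tokens: int = TOKENS_PER_ANSWER) -> float:
    """Cost of one generated answer at the assumed token price."""
    return tokens / 1000 * CLOUD_COST_PER_1K_TOKENS

ratio = llm_cost_per_query() / RETRIEVAL_COST_PER_QUERY
print(f"LLM answer: ${llm_cost_per_query():.4f} per query")
print(f"Retrieval:  ${RETRIEVAL_COST_PER_QUERY:.4f} per query")
print(f"Generation is ~{ratio:.1f}x more expensive per query")
```

Under these assumptions generation is several times more expensive per query than retrieval, which is exactly the asymmetry that makes shifting the compute (and the battery drain) onto the user’s device attractive.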

Apple’s focus on privacy is likely to appeal to users who are concerned about data privacy, who want a fast and reliable experience, and who hesitate to ask questions that feel “too personal” to put to ChatGPT or Google’s Bard.

In addition to the advantages mentioned above, on-device LLMs also have the potential to improve the user experience in several ways. For example, on-device LLMs can be used to provide more personalized recommendations, to generate more accurate translations, and to answer questions more quickly.

The interesting strategic questions I will be watching with Apple’s first LLM release are:

  1. At what layer will Apple expose the native LLMs?
    An API for app developers, or Siri only? I believe Siri comes first.
  2. If Apple exposes its LLMs through an API, what would be the strategy of Google, Bing, and Twitter, who are all investing in cloud LLMs?
  3. Monetization is not the big question here, but the investment has to be justified somehow. I guess using LLMs on iPhone is included in the 30% App Store cut 🙂
  4. How will Apple regulate which queries can pass through? How can developers build their own policy layer?
  5. What is the process, and what capability will developers have, to fine-tune the models? And can they do it on-device? 🙂
  6. The data engine for Apple’s LLMs: most users use Google to search (and might default to Bard eventually). This limits the data Apple’s team can collect to actually improve the models with RLHF.
