Why Google’s Gemini and Pixie Assistant need to be ubiquitous

Earlier this year, it seemed like OpenAI’s meteoric rise with chatbots like ChatGPT spelled trouble for Google in the AI race. Google had been plagued by infighting between its AI research teams, while OpenAI captured the public’s imagination.

But Google has since taken steps to resolve internal tensions over AI priorities. It is also going on the offensive with its own new natural language model called Gemini. Embedded in Google’s existing search infrastructure, Gemini aims to power more conversational and contextual search experiences.

Today, a closer look at The Information’s piece (https://www.theinformation.com/articles/how-google-got-back-on-its-feet-in-ai-race) suggests that was the easy part.

Fully rolling out Gemini poses challenges. If Gemini capabilities remain exclusive to Google’s Pixel phones at first, that risks alienating the broader Android ecosystem. Developers may view Gemini as just another fragmented Google product rather than a platform to build on.

Historically, successful platforms thrive by attracting broad developer ecosystems (case in point: Microsoft Windows, and Android itself). If developers perceive Gemini access as being too narrow or strategically limited by Google, they may abstract it behind middleware layers. This middleware approach has enabled cross-platform developer tooling to flourish despite underlying fragmentation in areas like mobile. OpenAI’s API is a classic case: much of the tooling built on top of it already sits behind such abstraction layers.
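To make the middleware point concrete, here is a minimal, hypothetical sketch of the kind of provider-agnostic layer developers tend to build. The interface and class names (ChatProvider, GeminiProvider, OpenAIProvider) are illustrative assumptions, not any real SDK; the vendor calls are stubbed so the sketch stays self-contained.

```python
from abc import ABC, abstractmethod


class ChatProvider(ABC):
    """Hypothetical provider-agnostic interface; names are illustrative."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Return the model's reply to a single prompt."""


class GeminiProvider(ChatProvider):
    def complete(self, prompt: str) -> str:
        # In a real app this would call Google's Gemini API;
        # stubbed here to keep the sketch runnable on its own.
        return f"[gemini] reply to: {prompt}"


class OpenAIProvider(ChatProvider):
    def complete(self, prompt: str) -> str:
        # Likewise, a stand-in for a call to OpenAI's chat API.
        return f"[openai] reply to: {prompt}"


def answer(provider: ChatProvider, question: str) -> str:
    # Application code depends only on the abstraction, so swapping
    # vendors is a configuration change rather than a rewrite.
    return provider.complete(question)


if __name__ == "__main__":
    print(answer(GeminiProvider(), "What's new in Android?"))
    print(answer(OpenAIProvider(), "What's new in Android?"))
```

Once developers work against a layer like this, the underlying model becomes interchangeable, which is exactly the commoditization risk Google faces if Gemini access stays narrow.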

For Gemini to genuinely compete as an AI platform, Google may need to make it more ubiquitously accessible. That likely means making conversational Gemini features broadly available across Google’s existing digital properties and third-party integrations. Google will also need to convince developers that Gemini represents an opportunity, not a threat.
