What is Google I/O?

Google I/O is a conference hosted by and about Google. While the main conference takes place in Mountain View, California, Google started to host the Google I/O Connect conferences in 2023. These take place in multiple locations around the world and mirror the setup and topics of the main conference, scaled down a bit.

I/O Connect Berlin

The agenda is accessible beforehand, so you can plan your day. Just like the main conference, there are four core topics this year:

  • AI
  • Web
  • Mobile
  • Cloud

The conference was opened by Tim Messerschmidt, who shared some impressive numbers: 1,300 people from over 100 EMEA countries were present in Berlin. Afterwards, various speakers gave a quick overview of the most important topics and innovations, especially in the area of AI. The talks after the opening remarks took place in parallel, so you had to make a selection. I visited the AI and the Web stages.

Key Takeaways

AI

The topics covered fit into the bigger picture of AI changing the way we will develop in the future, but they also show that there is quite some way to go. While Google advertises its cloud services and the power of its Gemini models, the Chrome team points out that a model running on the device has the benefit of lower cost for both provider and end user (in terms of money and loading times/latency) while offering the possibility of GDPR conformity, because it runs on the device only, without sending data anywhere. Google has a Gemini model for this, Gemini Nano, which is also integrated into Chrome and the Chrome DevTools.

Several presentations showed that AI will be integrated into more and more products as a supportive agent rather than “replacing developers”. VS Code, IntelliJ, and others will see Gemini integration, and Google Cloud Run will use AI to support you in simple DevOps tasks, to name a few. While announcing a lot of AI integrations and products utilizing AI, Google also shows a trend towards resource usage awareness – not for environmental, but for monetary reasons. Gemini Flash and Pro are extended with context caching: parts of the prompt that do not change can be cached to save money. Gemma 2 was announced with the key features of being more efficient and safer while requiring only a single GPU.
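The billing idea behind context caching can be illustrated with a small sketch. This is not the Gemini API – all names and the toy word-based "token" count below are made up – but it shows the principle: the static part of a prompt (system instructions, reference documents) is processed and billed once, and subsequent requests only pay for the changing suffix.

```javascript
// Conceptual sketch of context caching (NOT the actual Gemini API):
// the static prompt prefix is counted once; each request is only
// billed for its dynamic suffix. Token counting is a toy word count.
function makeCachedPrompt(staticContext) {
  const cachedTokens = staticContext.split(/\s+/).length;
  return {
    cachedTokens,
    // Each call only "pays" for the dynamic part of the prompt.
    request(dynamicSuffix) {
      const newTokens = dynamicSuffix.split(/\s+/).length;
      return {
        prompt: staticContext + "\n" + dynamicSuffix,
        billedTokens: newTokens, // cached prefix is not re-billed
      };
    },
  };
}

const cached = makeCachedPrompt("You are a support bot. Product manual: ...");
const r1 = cached.request("How do I reset the device?");
const r2 = cached.request("Where is the power button?");
// Only the questions are billed; the manual context is paid for once.
```

The real API additionally charges a (cheaper) storage fee for the cached context, so caching pays off when the static prefix is large and reused often.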

For the use of AI models, one proposal is a hybrid approach: download the model and run it on the device if the device can handle it, otherwise run it on the server. An intriguing thought, but at the time of writing, the vast majority of mobile devices are not able to run such models at all, or not fast enough to provide a real benefit to the end user. But hardware development has already taken steps so that future devices will be capable of doing so.
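The decision logic of such a hybrid setup might look like the following sketch. There is no standardized cross-browser capability API yet, so the capability object, field names, and the memory threshold here are all hypothetical:

```javascript
// Hybrid approach sketch: prefer the on-device model when the device
// can run it, otherwise fall back to a server-hosted model.
// All capability fields and the 2 GB threshold are assumptions.
function chooseBackend(capabilities) {
  const { supportsOnDeviceInference, modelDownloaded, freeMemoryMB } = capabilities;
  // Run locally only if the hardware supports it, the model is already
  // downloaded, and there is enough memory headroom.
  if (supportsOnDeviceInference && modelDownloaded && freeMemoryMB >= 2048) {
    return "on-device";
  }
  return "server";
}

chooseBackend({ supportsOnDeviceInference: true, modelDownloaded: true, freeMemoryMB: 4096 });
// → "on-device"
chooseBackend({ supportsOnDeviceInference: false, modelDownloaded: false, freeMemoryMB: 1024 });
// → "server"
```

The key design point is that the fallback is transparent to the user: the same request interface is served either locally (cheap, private, low latency) or remotely (works on weak hardware).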

AI’s hunger for resources also has effects beyond the virtual world. It is not hot news, but Google is investing a lot in cloud infrastructure in Europe and Africa and is even building the first ever undersea fiber-optic cable connecting Africa and Australia.

Web

“AI is only as good as the experience built around it.” This quote from one of today’s sessions at the web stage, combined with “focus on the user and all else will follow” from Google’s “ten things we know to be true”, frames some of the key takeaways from the talks. During the last year, many new features have arrived in browsers to provide your users with a better experience. But: many of the announced features are not yet implemented by Firefox and Safari, so you should keep an eye on them without overhyping them.

The Speculation Rules API lets you prerender whole pages by defining JSON speculation rules in a script tag. The View Transition API aims at providing seamless visual transitions between different views. Combined with speculation rules, navigation can then feel “almost instant”. One of the more established new APIs is the Document Picture-in-Picture API, which powers the little floating player of Spotify. Last but not least, the developer documentation on scroll-driven animations is worth a read for anyone interested in a good UX when scrolling down a page – a thing that happens quite often in the vertical world of the mobile web.
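To illustrate, a minimal speculation rules block looks roughly like this (the URL is a placeholder for a page of your own):

```html
<!-- Speculation Rules: tells the browser it may prerender the listed
     page ahead of navigation, so a later click feels almost instant. -->
<script type="speculationrules">
{
  "prerender": [
    { "source": "list", "urls": ["/next-page.html"] }
  ]
}
</script>
```

Prerendering is a hint, not a command – the browser may skip it under memory or battery pressure, so the page must still load normally when the speculation did not happen.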

All the new APIs and (incoming) native support for features aid not only the developer experience but also the end user, because fewer lines of CSS/JS are needed and thus the cost of shipping a good UX decreases. Yet not all browsers – and, naturally, older versions – implement those features. With the recent events around polyfill.io, one has to wonder how to safely adopt the new features. Simple yet effective advice was given by the Google experts: host polyfills yourself and review what you host beforehand. I know many developers who simply use polyfills without thinking about it. If it is worth the effort, you can also check whether a new API plus its polyfill actually outperforms self-written code.
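Self-hosting pairs naturally with feature detection: load only the polyfills the current browser is actually missing, from your own reviewed copies. A minimal sketch (the `/polyfills/` path is a placeholder for wherever you host your reviewed files):

```javascript
// Return only the feature names the given environment is missing,
// so we never ship polyfill code the browser does not need.
function polyfillsNeeded(globalObject, features) {
  return features.filter((name) => !(name in globalObject));
}

// In the browser, you would then load each missing polyfill from your
// own origin instead of a third-party CDN, e.g.:
//   for (const name of polyfillsNeeded(window, ["structuredClone"])) {
//     const s = document.createElement("script");
//     s.src = "/polyfills/" + name + ".js"; // self-hosted, reviewed copy
//     document.head.appendChild(s);
//   }

polyfillsNeeded({ structuredClone: () => {} }, ["structuredClone", "fetch"]);
// → ["fetch"]
```

Because the files come from your own origin and you reviewed them once, a compromise of an external polyfill provider – the polyfill.io scenario – cannot inject code into your pages.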

Follow Up

A lot more was presented at I/O Connect – too much to cover in one blog post. Use the list below to dive into specific topics: