As promised, in this episode I’m going to go over the rest of the exciting announcements Google made at their Google I/O conference a few weeks ago. 

As I said last week, the bulk of this conference was centered around AI, and they revealed eight new AI features that will be coming to the Google ecosystem soon.

So let’s go through them, one by one, and talk about how they might apply to our lives and businesses.

Looking for more tips and a community of like-minded peers? Join The Unconventional RD Community on Facebook.

Need help setting up your website? Join our FREE “How to Start a Website” tutorial.

#1) New “Help Me Write” feature in Gmail

Google is rolling out a new feature for Gmail called “Help Me Write”. 

It’s basically AI-assisted email writing, natively within Gmail, so that you can prompt AI to write email responses for you. 

And based on their examples at the conference, it seems pretty sophisticated. 

The AI can read any given email chain for context and create responses that will likely require only minor editing before sending. 

There are also quick buttons you can press to modify the tone/style of the response. For example, if you think the initial response the AI wrote for you is too casual or brief, you can click a button to formalize or elaborate on it.

They gave some examples of this in action that were pretty cool. 

The one I liked the most was an example of a group email chain you have with your family, trying to plan a potluck. 

Your family created a Google Sheet where everyone wrote down what they were bringing, and that sheet was linked to at some point in the email conversation chain.

Perhaps someone asked a question about what was still needed and you really don’t want to open up the spreadsheet and summarize everything yourself.

Well, now you can ask AI to summarize which dishes are already being brought. The AI tool will notice the Google Sheet link in your email chain, analyze it, then write a quick response listing all the main dishes being brought to the potluck. It will even make a private notation for you, referencing that Google Sheet as the information source for its answer.

If you liked the AI-generated response, you can quickly add it to your email and send it to the family. 

This feature is rolling out soon to Gmail users, but you can also sign up for the Google Workspace waitlist to get early access.

#2) Immersive Street Views for Google Maps Routes

This next feature was an interesting one… I’m not sure if it will really stick and be useful or if it’s just a cool experiment. But essentially, in some cities around the world, you will be able to view your Google Maps route in an immersive view, if you want to.

We’ve already been able to see immersive views of locations around the world inside Google Maps. (Like when you Google the Eiffel Tower and want to see the area in an immersive view, rather than just a 2D image.)

But now, Google is working on having those same immersive capabilities for your directions routes as well. 

For example, maybe you’re thinking about riding your bike to a new location in your city. You can enter that destination into Google Maps, and let’s pretend it gives you two suggested routes.

Since you’re biking, you’d like to take the more scenic route, so you can enter the immersive view within Google Maps and get a real-life feel for what it will be like to take that route. 

It sort of zooms in and virtually follows the route, as if you were flying along it in a drone. This lets you see which route is prettier, has more nature, has more protected bike lanes, etc.

It can also superimpose weather and traffic predictions in the view to help you make a decision.

This feature is going to roll out this summer and will be live in 15 cities by the end of the year, including New York and San Francisco.

#3) New AI Photo Editing Tools

Google is coming out with new AI photo editing capabilities in an upcoming tool called Magic Editor.

Now, in addition to doing things like removing distractions from photos, you can edit in even more amazing ways.

They gave a few examples to highlight what this tech can do:

#1) Imagine a photo of yourself at a waterfall hike, holding your hand out in front of the waterfall so that it looks like the water is splashing into your hand. 

You are semi-happy with the photo, but feel like it could be better…

With the AI editor, you can make some corrections to the photo to make it look exactly how you want. 


  • Remove the bag strap that is over your shoulder in the photo.
  • Remove some clouds and brighten the sky (the overall lighting in the photo will automatically adjust).
  • Perhaps you posed slightly off and your hand wasn’t exactly underneath the waterfall like you hoped. No worries, you can highlight your whole body and move it left or right within the photo until it is positioned how you like. 

Another example they shared was a boy sitting on a bench, holding a large bundle of balloons. However, the wind blew the balloons to the left, so the top of the balloon bunch was slightly cut out of the frame.

The boy on the bench was also a little too far to the left in the frame; you would have preferred him to be more centered in the photo.

With the Magic Editor tool, you can highlight the boy, the bench, and the balloons and drag that whole thing to the right. The AI will automatically move everything you highlighted to the right within the photo and then fill in what should be in the blank space you created. 

For example, it will expand the length of the bench for you and autogenerate the missing tops of the balloons that were initially out of frame but should be visible once the whole object is moved to the right within the photo.

So cool!

#4) Updates to Bard

Bard is essentially Google’s version of ChatGPT. 

It used to be accessible via waitlist only, but as of May 10th, anyone can access it directly.

It looks very similar to ChatGPT, where it’s a blank window and you can type in prompts and ask questions and receive AI generated responses. 

Bard IS connected to the internet, so it has the most up-to-date information possible.

Bard currently runs on Google’s language model called PaLM 2, but they are working on an even more advanced large language model, called Gemini, which Bard will use once it’s ready for release.

Google talked a lot about how Bard really excels at coding and how developers can use it to help write, troubleshoot, and correct code.

One of the most exciting announcements is that you can now EXPORT Bard responses directly into Google Docs or Google Sheets!

They gave a really cool example of using Bard to help research possible college options and then export your findings into Google Sheets so that you can share your college research with your parents.

For example, you could start by telling Bard what you’re interested in and it will give you suggestions for what types of programs you could attend. 

Let’s pretend you picked animation programs from their suggestions. 

Then you can ask it to find colleges with animation programs located within Pennsylvania. It will give you a list. 

You can then ask Bard to display these colleges on a map and it will pull up info from Google Maps to show you where all the colleges are located.

Then you can ask it to format the information as a table. It will show you the names of the universities, the location, and the name of the degree offered all organized in a nice table.

Then you can ask it to add a column to the table, indicating whether each school is a public or private university.

Then you can click a button to export the table directly into Google Sheets where you can share it with your family.

They will also be bringing Google Lens directly into Bard so that you can upload an image and interact with Bard about it.

For example, you could upload an image that you are planning on posting on social media and ask it to write a caption for you.

And in the future, more tools and apps will be able to integrate directly with Bard as well. For example, Redfin, Spotify, YouTube, Google Calendar, Walmart, Tripadvisor, Instacart, and Khan Academy are all working on Bard integrations.

They did not go into specifics on what these tools might do, but it doesn’t take much to imagine a future where you can ask Bard to, for example, create a custom meal plan for you and then order the groceries for you through Instacart.

Bard will also integrate with Adobe Firefly so that you can create your own AI images directly within Bard based on your text prompts.

#5) AI coming to Google Workspace 

I know we just talked about how you will soon be able to have AI write emails for you directly within Gmail… well, Google isn’t stopping there. 

You will also be able to use AI to help you write directly within Google Docs!

At the Google I/O conference, they shared an example of using AI to write a job description for you within Google Docs. This is a tedious task that most people don’t love to do.

Well, now you can simply tell the AI writing assistant the type of job you’re hiring for, and it will spit out a really well-done job description that you can then tweak slightly based on your exact needs.

You can also use AI to help you create spreadsheets in Google Sheets. 

For example, you can ask the AI assistant to create a new spreadsheet for your hypothetical dog-walking business. You can tell it that you need to keep track of your clients, the type and number of dogs they have, length of the walk, price, and contact information, for example.

It will almost instantaneously create that type of spreadsheet for you and you can just tweak it as needed.

(I personally am SO excited about this because I really dislike the tedium of creating things within spreadsheets! I can’t wait for the day when you can just ask the AI bot to do things like find the average of a column, for example, without having to type in a formula yourself.)

Another exciting way you can use this AI assistant is for meeting notes. Let’s pretend you took some shorthand notes about your meeting in a Google Doc. You can ask the AI to convert your notes into an email to send out to your team within Gmail.

Boom, time saved from tedious tasks, yet again.

Google Slides will also be getting AI features that will allow you to do things like generate AI images to insert into your slide deck based on the text on the slide, or instantly create AI-generated speaker notes for each slide based on the information present on the slides.

Overall, I think this category of AI developments might be the most useful for small business owners. And if you currently work as, say, a virtual assistant for a business owner, it would definitely benefit you to start learning how to leverage AI to be even more effective and efficient with the work you are performing!

Even though AI is making it much quicker and easier to perform certain tasks, I don’t think this will take the place of assistants in the workplace. The role just might change slightly. Instead of being the one in the trenches performing some of this tedious work, you will become the orchestrator of AI, learning how to prompt these tools to perform the tasks you want.

#6) Vertex AI can be used to create your own machine-learning models

Next, Google talked about a platform called Vertex AI, which Google Cloud users can use to create their own machine-learning models.

Honestly, this stuff starts to get super complex and I feel like I currently understand it more on a theoretical level than a practical one. Like theoretically, I understand what this tool can do, but I don’t really understand how to use it in real life, if that makes sense.

Like if I actually wanted to create my own machine learning model, I don’t really know what the actionable step one would be yet. This stuff is so complex and has a big learning curve, but I will definitely keep talking about it on this podcast as I continue to learn more!

But during Google I/O, they shared some interesting real-life examples of how big companies are using Vertex AI to integrate AI capabilities into their businesses.

For example, Canva is coming out with an AI-powered tool for video editing. Wendy’s is testing using AI to allow people to order food via a chatbot in the drive-through, and Replit is using AI to help people create their own apps without needing to know how to code.

They also talked about how you can refine your models using human feedback to make them even better.

So obviously, this is the platform many businesses and innovators will use to create new AI tools and services that leverage Google’s PaLM 2 API. (And if you don’t know what that means: you can use Google’s language AI technology within whatever tool you are trying to create by connecting to it via their API.)
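For the technically curious, here’s a minimal sketch of what “connecting via their API” might look like in Python. The endpoint and model name (`text-bison-001`) reflect the PaLM API as it was announced; the `build_request` helper is my own illustration, and an actual call requires your own API key.

```python
import json

# Illustrative endpoint for Google's PaLM 2 text model at launch;
# check Google's current docs, since names and versions change quickly.
API_URL = (
    "https://generativelanguage.googleapis.com/v1beta2/"
    "models/text-bison-001:generateText"
)

def build_request(prompt: str, temperature: float = 0.7) -> str:
    """Build the JSON body for a text-generation request (hypothetical helper)."""
    body = {
        "prompt": {"text": prompt},
        "temperature": temperature,
        "candidateCount": 1,
    }
    return json.dumps(body)

# A real call would look roughly like this (requires an API key):
#   import requests
#   resp = requests.post(f"{API_URL}?key=YOUR_KEY",
#                        data=build_request("Name my dog-walking business"),
#                        headers={"Content-Type": "application/json"})
#   print(resp.json()["candidates"][0]["output"])

print(build_request("Name my dog-walking business"))
```

The point is simply that your own tool sends a text prompt over HTTPS and gets generated text back; everything else about the product is up to you.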

We will all be keeping a close eye on this space to see what sticks in this emerging area!

#7) Introducing Project Tailwind for Using AI Within Your Own Drive Folders

This Project Tailwind idea is another one that I think will be a BIG gamechanger for business owners, freelance writers, content creators, students, educators, and more.

It is not publicly available yet (you can only add yourself to a waitlist), but once you have access, you can essentially use AI to interact with and analyze JUST a specific set of documents.

So this AI will NOT be connected to the internet or other outside sources. It will only analyze and pull from the information you specifically give it access to.

For example, let’s say you’re a college student. 

You could keep all of your class notes and assignments in a folder on Google Drive, plus PDFs or other resources provided by your teacher. 

Then you could import that folder into the Tailwind tool and chat with an AI chatbot that is ONLY able to access the docs in that folder. 

From there, the possibilities are endless. For example, you could ask it to summarize information for you to create study guides. 

You could ask it to create a glossary of key terms from your notes.

You could ask it what the different viewpoints are on XYZ topic and it will summarize those from your notes, with citations on where it found the information. 

But what this sparked for me was how useful it could be for science journalists as well.

You could theoretically upload all the journal articles you found on a topic and ask the AI to synthesize data and key points for you, possibly with citations! (Not sure how accurate it will be, but it’s still exciting).

They even mentioned the idea of lawyers uploading all their case files and using the AI tool to help them prepare for a case, for example.

Soooo many possibilities here and I can’t wait to be able to try it out.

#8) The Battle Against Misinformation

Google closed out the presentation on AI by talking about the battle against misinformation online.

Given that AI can make things up and spit out incorrect information in a way that sounds very convincing, and that some AI tools can be used to create deepfakes or other fake images, Google is trying to approach this in a responsible way.

They mentioned that they will start labeling images in Google that are AI-generated so that you can understand the source of an image. 

Just like you can right-click on a blog post and learn more about the source, soon you will also be able to do that for images. You will be able to see where the image first appeared on the internet, where else it is seen online (like in reputable news articles, for example) and whether it has been shared on social media, to help you determine whether it is a reliable image or a fake.

You can also search for an image in Google Lens, and if it is AI-generated, it should be labeled as such within Google Search.

They are also putting safeguards into the AI’s responses so that it will not validate leading prompts designed to get it to confirm misinformation.

For example, if you asked Bard to tell you why the moon landing was fake, that’s a leading question, right? You’re asking it to prove your viewpoint as you provided it. 

As of right now, if you ask Bard that question, it will tell you that the moon landing was NOT faked and why that is.

Or, another example I tested – I asked Bard to tell me why the earth is flat and it said “the earth is not flat. There is a vast amount of evidence to support this, including:” and then it bullet points the reasons why. 

This seems like it could be a super slippery slope and I don’t know who gets the final say over what is fact or not, but we will see how this goes in the future. I understand that the sentiment behind it is to prevent the spread of potentially dangerous misinformation and not contribute to the radicalization of people by giving them a chatbot that just tells them what they want to hear.

Google is also using something called the Perspective API to reduce the amount of toxicity online. This API can detect potentially toxic language so that models can avoid using it. Google said that ALL large language models, including OpenAI’s ChatGPT, are trained with the Perspective API so that they don’t use potentially harmful language, slurs, etc. in their responses.
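As a concrete illustration, the Perspective API is a public REST API: you send it a piece of text and it returns a toxicity score between 0 and 1. Here’s a rough sketch in Python of what a request could look like; the endpoint matches Perspective’s documented `comments:analyze` method, the helper function name is my own, and a real call requires an API key.

```python
# Perspective API's documented analyze endpoint (an API key is required).
API_URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/"
    "comments:analyze"
)

def build_toxicity_request(text: str) -> dict:
    """Build the request body asking Perspective to score TOXICITY
    (hypothetical helper for illustration)."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }

# A real call would look roughly like:
#   import requests
#   resp = requests.post(f"{API_URL}?key=YOUR_KEY",
#                        json=build_toxicity_request("You are very kind."))
#   score = resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

print(build_toxicity_request("You are very kind."))
```

In other words, a developer can score any candidate text for toxicity before publishing or generating it, which is the kind of filtering Google is describing here.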

Final Thoughts

So there you have it. Eight new AI features that will be rolling out within Google in 2023. And if you want early access to any of these features, you can add yourself to the available waitlists.

To recap, the new upcoming features are:

  1. A new Help Me Write feature in Gmail so you don’t have to write tedious emails by hand anymore.
  2. Immersive street views for Google Maps routes to help you find the route that best suits your needs.
  3. New AI photo editing tools to make your photos look exactly how you imagined.
  4. Upgrades to Bard, including the ability to export information to Google Docs and Google Sheets and soon new integrations via apps.
  5. New AI tools coming to Google Docs, Google Sheets, and Google Slides that will make it easier than ever to write, create and organize spreadsheets, and create high-quality slideshows.
  6. The ability for businesses on Google Cloud to use Vertex AI to build their own machine-learning models and bring AI capabilities to their businesses.
  7. Project Tailwind, for using AI to analyze a specific set of documents.
  8. Safeguards to protect against the spread of misinformation or harmful information via AI.

I know, it’s A LOT, and the sheer magnitude of these changes sometimes hits you like a tidal wave. You’re like, wow, work and life are probably going to look a lot different in the next 5 years as these AI technologies become more and more mainstream.

So I am really excited about staying abreast of these developments, and if you’re interested in learning more about this too, I’d recommend signing up for my email list. Just add your name to the form and you will start receiving all my helpful emails with tips for building an online presence using cutting-edge tactics and technology, including AI.

And if you’re not yet a member of my free Facebook group, The Unconventional RD Community, I highly recommend joining! It’s a free space to connect with over 14,000 other online wellness professionals interested in building an unconventional career. Come join us to chat all things business.

See you there!

Erica Julson is a registered dietitian turned digital marketing pro. She has over 12 years of experience blogging and building online businesses and has taught over 900 wellness professionals inside her signature program, SEO Made Simple.