Here’s a look at today's AI briefing:
- Apple to unveil new AI features at Monday's WWDC.
- Microsoft reworks Recall to ease privacy concerns.
- Tech industry opposes California AI regulation bill.
- Google cracks down on AI apps generating explicit content.
- Study: AI chatbots struggle with election questions.
- This week's top AI stories.
By Beth Duckett
1 | Apple's AI overhaul for its hardware will reportedly be called "Apple Intelligence." The company will unveil the new AI features and software capabilities for iPhones, Macs, and iPads at its Worldwide Developers Conference on Monday.
More:
- Apple is expected to offer an OpenAI-powered chatbot for its devices.
- The company will reportedly focus on "practical" AI tools, such as summarizing missed texts, notifications, web pages in Safari, articles, documents, and notes.
- Other planned AI features include voice memo transcription, photo retouching, and suggested replies to emails and messages.
- Apple could announce an AI-upgraded version of Siri. The upgrade could make Siri sound more natural and give the voice assistant control over Apple apps.
- Another standout feature could be AI-created emoji, generated on the fly from words or phrases the user types.
2 | To ease privacy concerns, Microsoft will make its upcoming "Recall" feature for AI PCs opt-in rather than on by default, among other changes. The "photographic memory" feature captures screen snapshots every few seconds, letting users search their past computer activity.
More:
- After unveiling Recall, Microsoft faced criticism from privacy advocates concerned it could give hackers easy access to private data.
- To address concerns, Microsoft announced that the feature will be off by default.
- If turned on, Recall will require Windows Hello authentication, such as a PIN, fingerprint, or facial recognition.
- Microsoft is also adding data protection by encrypting the search index database of screenshots, which will be decrypted and made available only after the user authenticates.
- Recall also saves snapshots locally on the user's device, where AI processes the data to make it searchable.
Zoom out:
- A Recall preview will be available on Microsoft's AI-enabled "Copilot+ PCs," which will begin shipping in mid-June.
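The design Microsoft describes, snapshots and their search index stored locally, with the index kept encrypted and searchable only after the user authenticates, can be sketched roughly as follows. This is a toy illustration of the access-control pattern, not Microsoft's implementation; the class, method names, and the XOR "cipher" (a stand-in for real encryption) are all assumptions.

```python
import hashlib
import json

def _keystream_xor(data: bytes, key: bytes) -> bytes:
    # Symmetric XOR with a hash-chained keystream. NOT secure crypto;
    # it only stands in for real encryption in this sketch.
    stream = hashlib.sha256(key).digest()
    out = bytearray()
    for i, b in enumerate(data):
        if i > 0 and i % len(stream) == 0:
            stream = hashlib.sha256(stream).digest()
        out.append(b ^ stream[i % len(stream)])
    return bytes(out)

class RecallIndex:
    """Toy local snapshot index: encrypted at rest, search gated on auth."""

    def __init__(self, pin: str):
        self._key = hashlib.sha256(pin.encode()).digest()
        # The index starts as an empty, encrypted list of snapshot texts.
        self._blob = _keystream_xor(json.dumps([]).encode(), self._key)
        self._unlocked = False

    def authenticate(self, pin: str) -> bool:
        # Stands in for Windows Hello (PIN, fingerprint, face).
        self._unlocked = hashlib.sha256(pin.encode()).digest() == self._key
        return self._unlocked

    def add_snapshot(self, text: str) -> None:
        # The background capture service holds the key, so writes
        # don't require an interactive unlock in this sketch.
        entries = json.loads(_keystream_xor(self._blob, self._key))
        entries.append(text)
        self._blob = _keystream_xor(json.dumps(entries).encode(), self._key)

    def search(self, term: str) -> list[str]:
        # User-facing search is only available after authentication.
        if not self._unlocked:
            raise PermissionError("authenticate first")
        entries = json.loads(_keystream_xor(self._blob, self._key))
        return [e for e in entries if term in e]
```

The key point the sketch captures is that the searchable index is never available in plaintext until authentication succeeds, which is the change Microsoft announced in response to the criticism.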
3 | The tech industry opposes a California bill that would require AI companies to report safety tests and add a kill switch to their advanced models. SB 1047, introduced by state Sen. Scott Wiener, passed the state Senate last month and is set for an Assembly vote in August.
More:
- Wiener's office said the "common sense safety standards" would apply to large AI models meeting certain size and cost thresholds.
- The bill requires companies to avoid developing "hazardous" AI models and report their compliance efforts to a new "Frontier Model Division" of the California Department of Technology.
- Companies would have to prevent their AI models from causing "critical harm" and ensure they can be shut down via a "kill switch."
- Non-compliance could result in civil penalties or lawsuits.
Zoom out:
- Some tech founders, investors, and employees oppose the legislation, arguing it would burden developers, hinder open-source models, and force AI companies to leave the state.
- “If someone wanted to come up with regulations to stifle innovation, one could hardly do better,” AI expert Andrew Ng told the Financial Times.
4 | Google is cracking down on AI apps in its store that generate sexual and violent content. Google Play issued new guidelines to improve the quality and safety of AI apps as their use grows.
More:
- The guidelines require AI app developers to ban the "creation of restricted content" and give users a way to report or flag offensive material.
- Google also says developers must thoroughly test their AI tools and models to uphold user safety and privacy.
- Google is also banning apps from its Play Store that promote inappropriate uses in their marketing, such as creating nonconsensual nude images.
- The company said it will soon introduce "new app onboarding capabilities" for generative AI apps to Play.
5 | A GroundTruthAI study found that OpenAI’s ChatGPT and Google’s Gemini answered election questions correctly only 73% of the time.
What happened: Researchers asked the AI models over 200 questions about this year's election, voting, and candidates, generating 2,784 responses. The models included Google’s Gemini 1.0 Pro and OpenAI’s GPT-3.5 Turbo, GPT-4, GPT-4 Turbo, and GPT-4o.
What the numbers show:
- The models provided incorrect information about voting and the election 27% of the time.
- Google’s Gemini 1.0 Pro initially had a 57% accuracy rate, which improved to 67% on the second day before falling to 63%.
- OpenAI’s GPT-4o was the most accurate, answering correctly 81% of the time.
More:
- While some responses included a disclaimer to verify election information, many did not.
- None of the models could correctly answer how many days were left before the 2024 General Election.
- ChatGPT gave inconsistent answers about same-day voter registration, sometimes correct and sometimes not.
The bigger picture: Given the questionable responses, GroundTruthAI's CEO warned companies about incorporating more AI into search functions. Recently, Google started rolling out "AI overviews" at the top of search pages, which use the same model as Gemini, though the results differ from the chatbot's. Google started making "technical improvements" to the overviews following user reports of misinformation.
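As a quick sanity check on the study's figures, the 73% accuracy rate over 2,784 responses implies roughly 750 incorrect answers. A back-of-the-envelope calculation (the study's per-model response counts are not broken out here, so only the totals are used):

```python
total_responses = 2784   # responses generated across the five models
accuracy = 0.73          # reported overall accuracy rate

correct = round(total_responses * accuracy)   # about 2,032 correct answers
incorrect = total_responses - correct         # about 752 incorrect answers
error_rate = 1 - accuracy                     # 27%, matching the reported figure
```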
6 | Weekly roundup — The other top AI stories you might have missed this week:
- Nvidia, AMD, and Intel all announced new AI chips. Intel unveiled its latest AI chips for data centers, while AMD revealed chips for data centers and AI PCs. Nvidia also announced its next-gen AI chip platform, Rubin, due out in 2026.
- Nvidia's market cap topped $3 trillion for the first time Wednesday, surpassing Apple to become the world's second most valuable company. Its shares have surged around 147% this year, adding $1.8 trillion to its market cap amid soaring demand for its AI chips.
- The FTC and DOJ will split investigations into Microsoft, OpenAI, and Nvidia over possible antitrust violations. The DOJ will investigate Nvidia's dominance in high-end AI chips, while the FTC will probe OpenAI and its business partner Microsoft. The FTC is already investigating Microsoft's deal with Inflection AI.
- Humane is in talks to sell itself to HP after its wearable AI Pin received poor reviews. Humane had received around 10,000 AI Pin orders as of April, falling short of its 100,000 goal for the year.
- Insiders from top AI firms signed an open letter urging more AI transparency and protections for AI whistleblowers. The letter was signed by current and former employees of OpenAI, Google DeepMind, and Anthropic.
- Elon Musk's xAI startup plans to build "the world's largest supercomputer" in Memphis, Tennessee. Musk previously unveiled plans to build a supercomputer to power a more advanced version of Grok, xAI's chatbot.
7 | Quick Hits: *This is sponsored content.
AI and technology writer
Beth is a contributing editor and writer of Inside's AI and Tech newsletters. She has written for publications including USA Today, the Arizona Business Gazette, and The Arizona Republic, where she earned a Pulitzer Prize nomination and a First Amendment Award for collaborative reporting on state pension cost increases. You can reach her at Beth.Duckett@yahoo.com.
This newsletter was edited by Beth Duckett.