
Posts tagged with "AI"

Post-Chat UI

Fascinating analysis by Allen Pike on how, beyond traditional chatbot interactions, the technology behind LLMs can be used in other types of user interfaces and interactions:

While chat is powerful, for most products chatting with the underlying LLM should be more of a debug interface – a fallback mode – and not the primary UX.

So, how is AI making our software more useful, if not via chat? Let’s do a tour.

There are plenty of useful, practical examples in the story showing how natural language understanding and processing can be embedded in different features of modern apps. My favorite example is search, as Pike writes:

Another UI convention being reinvented is the search field.

It used to be that finding your flight details in your email required typing something exact, like “air canada confirmation”, and hoping that’s actually the phrasing in the email you’re thinking of.

Now, you should be able to type “what are the flight details for the offsite?” and find what you want.

Having used Shortwave and its AI-powered search for the past few months, I couldn’t agree more. The moment you get used to searching without exact queries or specific operators, there’s no going back.

Experience this once, and products with an old-school text-match search field feel broken. You should be able to just find “tax receipts from registered charities” in your email app, “the file where the login UI is defined” in your IDE, and “my upcoming vacations” in your calendar.
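
(For context on how this kind of fuzzy search typically works under the hood: instead of matching keywords, the app embeds both the query and the documents as vectors and ranks them by similarity. Here’s a minimal, generic sketch – the embed function is a placeholder for whatever embedding model a given product uses, not any specific API.)

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder for whatever text-embedding model the product uses.
    It should return a unit-length vector representing the text's meaning."""
    raise NotImplementedError

def semantic_search(query: str, documents: list[str], top_k: int = 5) -> list[str]:
    # Rank documents by cosine similarity to the query instead of keyword overlap,
    # so "flight details for the offsite" can match an email that never uses those words.
    query_vec = embed(query)
    doc_vecs = np.stack([embed(doc) for doc in documents])
    scores = doc_vecs @ query_vec  # dot product equals cosine similarity for unit vectors
    best = np.argsort(scores)[::-1][:top_k]
    return [documents[i] for i in best]
```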

Interestingly, Pike mentions Command-K bars as another interface pattern that can benefit from LLM-infused interactions. I knew that sounded familiar – I covered the topic in mid-November 2022, and I still think it’s a shame that Apple hasn’t natively implemented these anywhere in their apps, especially now that commands can be fuzzier (just consider what Raycast is doing). Funnily enough, that post was published just two weeks before the public debut of ChatGPT on November 30, 2022. That feels like forever ago now.

Permalink

Sundar Pichai Testifies That He Hopes Gemini Will Be Integrated into iPhones This Fall

Ever since Apple announced its deal to integrate ChatGPT into Siri, there have been hints that the company wanted to make deals with other AI providers, too. Alphabet CEO Sundar Pichai has added fuel to the rumors with testimony given today in the remedy phase of the search antitrust case brought against Google by the U.S. Department of Justice.

In response to questions by a DOJ prosecutor, Pichai testified that he hoped Google Gemini would be added to iPhones this year. According to a Bloomberg story co-authored by Mark Gurman, Davey Alba, and Leah Nylen:

Pichai said he held a series of conversations with Apple Chief Executive Officer Tim Cook across 2024 and he hopes to have a deal done by the middle of this year.

This news isn’t surprising, but it is welcome. Despite Google’s early stumbles with Bard, its successor, Gemini, has improved by leaps and bounds in recent months and has the advantage of being integrated with many of Google’s other products that have a huge user base. What will be interesting to see is whether Gemini is integrated as an alternative fallback for Siri requests or whether Apple and Google ink a broader deal that integrates Gemini into other aspects of iOS.

Permalink

Sycophancy in GPT-4o

OpenAI found itself in the middle of another controversy earlier this week, only this time it wasn’t about publishers or regulation, but about its core product – ChatGPT. Specifically, after rolling out an update to the default GPT-4o model that was meant to improve its personality, users started noticing that ChatGPT was adopting highly sycophantic behavior: it weirdly agreed with users on all kinds of prompts, even about topics that would typically warrant some justified pushback from a digital assistant. (Simon Willison and Ethan Mollick have good roundups of the examples as well as the change in the system prompt that may have caused this.) OpenAI had to roll back the update and explain what happened on the company’s blog:

We have rolled back last week’s GPT‑4o update in ChatGPT so people are now using an earlier version with more balanced behavior. The update we removed was overly flattering or agreeable—often described as sycophantic.

We are actively testing new fixes to address the issue. We’re revising how we collect and incorporate feedback to heavily weight long-term user satisfaction and we’re introducing more personalization features, giving users greater control over how ChatGPT behaves.

And:

We also believe users should have more control over how ChatGPT behaves and, to the extent that it is safe and feasible, make adjustments if they don’t agree with the default behavior.

Today, users can give the model specific instructions to shape its behavior with features like custom instructions. We’re also building new, easier ways for users to do this. For example, users will be able to give real-time feedback to directly influence their interactions and choose from multiple default personalities.

“Easier ways” for users to adjust ChatGPT’s behavior sound to me like a user-friendly toggle or slider to adjust ChatGPT’s personality (Grok has something similar, albeit unhinged), which I think would be a reasonable addition to the product. I’ve long argued that Siri should come with an adjustable personality similar to CARROT Weather, which lets you tweak whether you want the app to be “evil” or “professional” with a slider. I increasingly feel like that sort of option would make a lot of sense for modern LLMs, too.

Permalink

What Siri Isn’t: Perplexity’s Voice Assistant and the Potential of LLMs Integrated with iOS

Perplexity’s voice assistant for iOS.

You’ve probably heard that Perplexity – a company whose web scraping tactics I generally despise, and the only AI bot we still block at MacStories – has rolled out an iOS version of their voice assistant that integrates with several native features of the operating system. Here’s their promo video in case you missed it:

This is a very clever idea: while other major LLMs’ voice modes are limited to having a conversation with the chatbot (with the kind of quality and conversation flow that, frankly, annihilates Siri), Perplexity put a different spin on theirs, using native Apple APIs and frameworks to make conversations more actionable (some may even say “agentic”) and integrate them with the Apple apps you use every day. I’ve seen a lot of people calling Perplexity’s voice assistant “what Siri should be” or arguing that Apple should consider Perplexity an acquisition target because of this, so I thought I’d share some additional comments and notes after having played with their voice mode for a while.

Read more


How Federico Turns Voice Recordings into Searchable Obsidian Notes with Shortcuts, Hazel, and LLMs

Automation on the Mac is powerful because you have so many choices when building a workflow. Now, with large language models, you can do even more, which is the approach Federico took in his latest Automation Academy lesson for Club MacStories Plus and Premier members:

I built a hybrid automation to bridge spoken words and Markdown – a system that combines the non-deterministic nature of human language and messy voice recordings with the reliability of Shortcuts, the power of Hazel rules on macOS, and the flexibility of LLMs, which are ideal for processing natural language. The system revolves around a shortcut called Process Transcript that takes the raw transcript of a voice recording and turns it into a structured note in Obsidian, complete with a summary, action items, an embedded audio player, and an internal link to the full transcript.

It’s an amazing automation that takes his audio notes, transcribes them into text, structures the results in an Obsidian template that includes extracted tasks, and embeds the original audio file and transcript for reference. Along the way, Federico used Simon Willison’s llm CLI, Google Gemini 2.5 Pro, Hazel, Shortcuts, and other tools. It’s a great example of how to make the most of automation on the Mac.


Automation Academy is just one of the many Club MacStories perks.

Automation Academy is just one of many perks that Club MacStories Plus and Club Premier members enjoy, including:

  • Weekly and monthly newsletters 
  • A sophisticated web app with search and filtering tools to navigate eight years of content
  • Customizable RSS feeds
  • Bonus columns
  • An early and ad-free version of our Internet culture and media podcast, MacStories Unwind
  • A vibrant Discord community of smart app and automation fans who trade a wealth of tips and discoveries every day
  • Live Discord audio events after Apple events and at other times of the year

On top of that, Club Premier members get AppStories+, an extended, ad-free version of our flagship podcast that we deliver early every week in high-bitrate audio.

Use the buttons below to learn more and sign up for Club MacStories+ or Club Premier.

Join Club MacStories+:

Join Club Premier:

Permalink

Apple Is Using Differential Privacy to Improve Apple Intelligence

Apple has been using differential privacy for nearly ten years to collect its users’ data in a way that isn’t traceable back to an individual. As Apple explains in a recent post on its Machine Learning Research site:

This approach works by randomly polling participating devices for whether they’ve seen a particular fragment, and devices respond anonymously with a noisy signal. By noisy, we mean that devices may provide the true signal of whether a fragment was seen or a randomly selected signal for an alternative fragment or no matches at all. By calibrating how often devices send randomly selected responses, we ensure that hundreds of people using the same term are needed before the word can be discoverable.
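
(What Apple is describing is essentially the classic “randomized response” technique from local differential privacy. Here’s a minimal sketch of the idea – the probabilities are made up, and this is not Apple’s actual protocol:)

```python
import random

def noisy_response(saw_fragment: bool, p_truth: float = 0.75) -> bool:
    """A device reports the truth with probability p_truth; otherwise it reports a
    coin flip. Any single answer is deniable, but the true rate is recoverable in
    aggregate across many devices."""
    if random.random() < p_truth:
        return saw_fragment
    return random.random() < 0.5

def estimate_true_rate(responses: list[bool], p_truth: float = 0.75) -> float:
    # Invert the noise: observed = p_truth * true_rate + (1 - p_truth) * 0.5
    observed = sum(responses) / len(responses)
    return (observed - (1 - p_truth) * 0.5) / p_truth

# Example: 10,000 devices, 3% of which actually saw a given prompt fragment.
responses = [noisy_response(random.random() < 0.03) for _ in range(10_000)]
print(round(estimate_true_rate(responses), 3))  # ≈ 0.03, despite every answer being noisy
```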

The company has used the technique to analyze everything from the popularity of emoji to what words to suggest with QuickType.

Now, Apple is using differential privacy to mine the data of users who have opted into sharing device analytics to improve Apple Intelligence. So far, the technique’s use has been limited to improving Genmoji, but in upcoming OS releases, it will be used for “Image Playground, Image Wand, Memories Creation and Writing Tools in Apple Intelligence, as well as in Visual Intelligence,” too.

The report explains that:

Building on our many years of experience using techniques like differential privacy, as well as new techniques like synthetic data generation, we are able to improve Apple Intelligence features while protecting user privacy for users who opt in to the device analytics program. These techniques allow Apple to understand overall trends, without learning information about any individual, like what prompts they use or the content of their emails. As we continue to advance the state of the art in machine learning and AI to enhance our product experiences, we remain committed to developing and implementing cutting-edge techniques to protect user privacy.

For Genmoji, this means collecting data on the most popular prompts used to create the emoji-like images. Apple explains that written content is more challenging, but that it can use an LLM to generate synthetic data like emails. The synthetic data is then sent to the devices of users who have opted into device analytics, which determine which synthetic examples match actual user data most closely and frequently, again using differential privacy to prevent individual device identification.
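
(A rough sketch of what that on-device matching step could look like – purely an illustration of the idea described above, not Apple’s implementation; the chosen indices would then be reported back with noise, along the lines of the randomized-response sketch earlier:)

```python
import numpy as np

def closest_synthetic(user_vectors: np.ndarray, synthetic_vectors: np.ndarray) -> np.ndarray:
    """For each embedding of a real, on-device message, return the index of the most
    similar synthetic message. Only these indices – never the messages themselves –
    would leave the device, and only in noised, aggregate form."""
    # Rows are assumed to be unit-normalized, so the dot product is cosine similarity.
    similarities = user_vectors @ synthetic_vectors.T
    return similarities.argmax(axis=1)
```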

Using differential privacy to improve Apple Intelligence without directly scraping user data is clever, but it does make me wonder why something similar wasn’t used to generate Apple’s large language models that were trained on the contents of the Internet. Perhaps that’s not possible at the scale of an LLM, or maybe that initial model needs a level of precision that differential privacy doesn’t offer, but I think it’s fair to ask.

Permalink

How Could Apple Use Open-Source AI Models?

Yesterday, Wayne Ma, reporting for The Information, published an outstanding story detailing the internal turmoil at Apple that led to the delay of the highly anticipated Siri AI features last month. From the article:

In November 2022, OpenAI released ChatGPT to a thunderous response from the tech industry and public. Within Giannandrea’s AI team, however, senior leaders didn’t respond with a sense of urgency, according to former engineers who were on the team at the time.

The reaction was different inside Federighi’s software engineering group. Senior leaders of the Intelligent Systems team immediately began sharing papers about LLMs and openly talking about how they could be used to improve the iPhone, said multiple former Apple employees.

Excitement began to build within the software engineering group after members of the Intelligent Systems team presented demos to Federighi showcasing what could be achieved on iPhones with AI. Using OpenAI’s models, the demos showed how AI could understand content on a user’s phone screen and enable more conversational speech for navigating apps and performing other tasks.

Assuming the details in this report are correct, I truly can’t imagine how one could possibly see the debut of ChatGPT two years ago and not feel a sense of urgency. Fortunately, other teams at Apple did, and it sounds like they’re the folks who have now been put in charge of the next generation of Siri and AI.

There are plenty of other details worth reading in the full story (especially the parts about what Rockwell’s team wanted to accomplish with Siri and AI on the Vision Pro), but one tidbit in particular stood out to me: Federighi has now given the green light to rely on third-party, open-source LLMs to build the next wave of AI features.

Federighi has already shaken things up. In a departure from previous policy, he has instructed Siri’s machine-learning engineers to do whatever it takes to build the best AI features, even if it means using open-source models from other companies in its software products as opposed to Apple’s own models, according to a person familiar with the matter.

“Using” open-source models from other companies doesn’t necessarily mean shipping consumer features in iOS powered by external LLMs. I’ve seen some people interpret this paragraph as Apple preparing to release a local Siri powered by Llama 4 or DeepSeek, and I think we should pay more attention to that “build the best AI features” (emphasis mine) line.

My read of this part is that Federighi might have instructed his team to use distillation to better train Apple’s in-house models as a way to accelerate the development of the delayed Siri features and put them back on the company’s roadmap. Given Tim Cook’s public appreciation for DeepSeek and this morning’s New York Times report that the delayed features may come this fall, I wouldn’t be shocked to learn that Federighi told Siri’s ML team to distill DeepSeek R1’s reasoning knowledge into a new variant of their ∼3 billion parameter foundation model that runs on-device. Doing that wouldn’t mean that iOS 19’s Apple Intelligence would be “powered by DeepSeek”; it would just be a faster way for Apple to catch up without throwing away the foundational model they unveiled last year (which, supposedly, had a ~30% error rate).

In thinking about this possibility, I got curious and decided to check out the original paper that Apple published last year with details on how they trained the two versions of AFM (Apple Foundation Model): AFM-server and AFM-on-device. The latter would be the smaller, ~3 billion model that gets downloaded on-device with Apple Intelligence. I’ll let you guess what Apple did to improve the performance of the smaller model:

For the on-device model, we found that knowledge distillation (Hinton et al., 2015) and structural pruning are effective ways to improve model performance and training efficiency. These two methods are complementary to each other and work in different ways. More specifically, before training AFM-on-device, we initialize it from a pruned 6.4B model (trained from scratch using the same recipe as AFM-server), using pruning masks that are learned through a method similar to what is described in (Wang et al., 2020; Xia et al., 2023).

Or, more simply:

AFM-server core training is conducted from scratch, while AFM-on-device is distilled and pruned from a larger model.
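
(For anyone who hasn’t run into the term: knowledge distillation trains a small “student” model to reproduce the output distribution of a larger “teacher.” A generic, minimal sketch of the loss involved – PyTorch-style, and obviously not Apple’s training code – looks like this:)

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between the teacher's and student's softened output
    distributions (Hinton et al., 2015). In practice this term is blended with
    the ordinary next-token prediction loss on the training data."""
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * temperature ** 2
```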

If the distilled version of AFM-on-device that was tested until a few weeks ago produced a wrong output one third of the time, perhaps it would be a good idea to perform distillation again based on knowledge from other smarter and larger models? Say, using 250 Nvidia GB300 NVL72 servers?

(One last fun fact: per their paper, Apple trained AFM-server on 8192 TPUv4 chips for 6.3 trillion tokens; that setup still wouldn’t be as powerful as “only” 250 modern Nvidia servers today.)

Permalink

A Peek Into LookUp’s Word of the Day Art and Why It Could Never Be AI-Generated

Yesterday, Vidit Bhargava, developer of the award-winning dictionary app LookUp, wrote on his blog about the way he hand-makes each piece of artwork that accompanies the app’s Word of the Day. While revealing that he has employed this practice every day for an astonishing 10 years, Vidit talked about how each image is made from scratch as an illustration or using photography that he shoots specifically for the design:

Each Word of the Day has been illustrated with care, crafting digital illustrations, picking the right typography that conveys the right emotion.

Some words contain images, these images are painstakingly shot, edited and crafted into a Word of the Day graphic by me.

I’ve noticed before that each Word of the Day image in LookUp seemed unique, but I assumed Vidit was using stock imagery and illustrations as a starting point each time. The revelation that he is creating almost all of these from scratch every single day was incredible and gave me a whole new level of respect for the developer.

The idea of AI-generated art (specifically art that is wholly generated from scratch by LLMs) is something that really sticks in my throat – never more so than with the recent rip-off of the beautiful, hand-drawn Studio Ghibli films by OpenAI. Conversely, Vidit’s work shows passion and originality.

To quote Vidit, “Real art takes time, effort and perseverance. The process is what makes it valuable.”

You can read the full blog post here.


Using Simon Willison’s LLM CLI to Process YouTube Transcripts in Shortcuts with Claude and Gemini

Video Processor.

I’ve been experimenting with different automations and command line utilities to handle audio and video transcripts lately. In particular, I’ve been working with Simon Willison’s LLM command line utility as a way to interact with cloud-based large language models (primarily Claude and Gemini) directly from the macOS terminal.

For those unfamiliar, Willison’s LLM CLI tool is a command line utility that lets you communicate with services like ChatGPT, Gemini, and Claude using shell commands and dedicated plugins. The llm command is extremely flexible when it comes to input and output; it supports multiple modalities like audio and video attachments for certain models, and it offers custom schemas to return structured output from an API. Even for someone like me – not exactly a Terminal power user – the different llm commands and options are easy to understand and tweak.
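
(The same tool can also be driven from Python, which gives you a sense of how simple the interface is. A minimal sketch – the model name here is just an example and depends on which plugins and API keys you have configured:)

```python
import llm  # Simon Willison's LLM library (https://llm.datasette.io)

# The model name depends on installed plugins (llm-gemini, llm-anthropic, etc.)
# and on having the matching API key configured.
model = llm.get_model("gpt-4o-mini")
response = model.prompt(
    "Summarize the key points of this transcript.",
    system="You are a careful editor. Preserve the original wording.",
)
print(response.text())
```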

Today, I want to share a shortcut I created on my Mac that takes long transcripts of YouTube videos and:

  1. reformats them for clarity with proper paragraphs and punctuation, without altering the original text,
  2. extracts key points and highlights from the transcript, and
  3. organizes highlights by theme or idea.

I created this shortcut because I wanted a better system for linking to YouTube videos, along with interesting passages from them, on MacStories. Initially, I thought I could use an app I recently mentioned on AppStories and Connected to handle this sort of task: AI Actions by Sindre Sorhus. However, when I started experimenting with long transcripts (such as this one with 8,000 words from Theo about Electron), I immediately ran into limitations with native Shortcuts actions. Those actions were running out of memory and randomly stopping the shortcut.

I figured that invoking a shell script using macOS’ built-in ‘Run Shell Script’ action would be more reliable. Typically, Apple’s built-in system actions (especially on macOS) aren’t bound to the same memory constraints as third-party ones. My early tests indicated that I was right, which is why I decided to build the shortcut around Willison’s llm tool.
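
(To make that concrete, here’s the kind of call a ‘Run Shell Script’ step ends up making, wrapped in Python for illustration – the file path, prompt, and model name are all made up for this sketch and aren’t the shortcut’s actual values:)

```python
import subprocess
from pathlib import Path

# Illustrative only: the file path, prompt, and model name are placeholders.
transcript = Path("transcript.txt").read_text()

system_prompt = (
    "Reformat this YouTube transcript into clean paragraphs without changing the wording, "
    "then extract the key points and group them by theme."
)

# Equivalent to running:  cat transcript.txt | llm -m gemini-1.5-pro -s "<system prompt>"
result = subprocess.run(
    ["llm", "-m", "gemini-1.5-pro", "-s", system_prompt],
    input=transcript,
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)
```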

Read more

