UPDATE
September 29, 2025
Apple’s Foundation Models framework unlocks new app experiences powered by Apple Intelligence
With the release of iOS 26, iPadOS 26, and macOS 26 this month, developers around the world can bring even more intelligent experiences right into their apps by tapping into the on-device large language model at the core of Apple Intelligence.1 The Foundation Models framework allows developers to create new intelligence features that protect users’ privacy and are available offline, all with AI inference that is free of cost. Whether generating personalized quizzes to help students better prepare for an exam, or delivering insightful summaries of workout metrics, developers have embraced the framework to reimagine what’s possible within their apps and help users in new and delightful ways.
“We’re excited to see developers around the world already bringing privacy-protected intelligence features into their apps. The in-app experiences they’re creating are expansive and creative, showing just how much opportunity the Foundation Models framework opens up,” said Susan Prescott, Apple’s vice president of Worldwide Developer Relations. “From generating journaling prompts that will spark creativity in Stoic, to conversational explanations of scientific terms in CellWalk, it’s incredible to see the powerful new capabilities that are already enhancing the apps people use every day.”
Apps spanning health and fitness, education, and productivity are already taking advantage of the Foundation Models framework. Here are just a few apps that have leveraged the framework to release new intelligence features that are available now.
Powering New Health and Fitness Experiences

SmartGym offers an elegant and easy way for users to plan and track their workouts. By tapping into the Foundation Models framework, the app enables users to describe a workout and turn it into a structured routine with sets, reps, rest times, and equipment adaptation. The app’s Smart Trainer feature continues to learn from a user’s workouts, and offers recommendations such as adjusting reps, changing weights, or creating new routines. Now, each suggestion includes a clear explanation so users understand the reasoning behind the adjustments.
SmartGym also generates insightful summaries of workout data, including monthly progress overviews, routine breakdowns, and individual exercise performance, all presented in a simple, easy-to-understand format. Additionally, users can receive coaching messages that adapt to their preferred style, and after completing a workout, they can add personal notes or generate an entire note automatically based on workout data. And every time a user opens the app, SmartGym greets them with a personalized, dynamic message generated in real time, informed by current fitness data.
“The Foundation Models framework enables us to deliver on-device features that were once impossible,” said Matt Abras, SmartGym’s CEO. “It’s simple to implement, yet incredibly powerful in its capabilities.”

Stoic is a journaling app that helps users better understand their emotions, and provides insights on how to be happier, more productive, and overcome obstacles. Leveraging the Foundation Models framework, users can receive hyperpersonal journaling prompts that are generated from their recent entries. For example, if a user logs a low mood or poor sleep, they’ll receive an encouraging, compassionate message. Stoic can also deliver context-aware app notifications to remind users of recent written entries or moods logged. The prompts are generated entirely on device, meaning that a user’s personal entries stay personal.
In addition, the app can now suggest contextual journaling prompts that invite reflection, as well as tailored starting phrases to help users kick-start an entry. Users can also reflect back on their past entries with enhanced views powered by the Foundation Models framework, including reading summaries of their journal entries, organizing related entries, and finding entries using the app’s improved natural language search.
“With the Foundation Models framework, prompts and reflections now adapt to a user’s state of mind, so the experience feels personal and evolves day by day,” said Maciej Lobodzinski, Stoic’s founder. “What amazed me was how quickly we could build these ideas. Features that once required heavy back-end infrastructure now run natively on device with minimal setup. That let our small team deliver huge value fast while keeping every user’s data private, with all insights and prompts generated without anything ever leaving their device.”



Additional health and fitness apps have tapped into the Foundation Models framework to power all-new experiences. SwingVision, an app that helps users with their tennis or pickleball skills, generates advice to help players improve their game. The app uses the Foundation Models framework to analyze a user’s game footage, drawing on output from Core ML models to give highly actionable and specific feedback. 7 Minute Workout allows users to create dynamic workouts using natural language, such as specifying that they want to avoid exercises that would exacerbate an injury, or that they’re preparing for an event. The app also delivers motivational feedback in a friendly, natural tone. Journaling app Gratitude generates detailed weekly summaries of challenges, wins, intentions, and suggested affirmations. Using the Foundation Models framework, the app also transforms journal entries into personal, context-aware affirmations.
Train Fitness also uses Apple’s Foundation Models framework to recommend a user’s next exercise when the required equipment is unavailable. Users can refine workouts by entering specific instructions, such as preferred exercise types or muscle limitations. Motivation, which provides users with positive reminders, organizes content that users favorite into emotional and thematic categories, while Streaks intelligently suggests and automatically categorizes tasks in a to-do list. Wakeout! generates personalized movement breaks with detailed reasoning behind each chosen exercise. By using generable structures, the app has the foundation model choose from the thousands of videos available and create the right routine for each user.
Unlocking Opportunities for Education Apps

The immersive biology app CellWalk lets users explore a detailed 3D cell down to the molecular level, or take a tour through life’s molecular machines. The app lets students and researchers select unfamiliar terms for further clarification. It uses the Foundation Models framework to generate a conversational explanation of the term, using tool calling to ground the responses based on the app’s scientific information. By setting a user profile, CellWalk tailors explanations to the learner’s knowledge level, while retaining history to reinforce learning.
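Grounding responses through tool calling, as CellWalk does, can be sketched roughly as follows in Swift. The tool name, glossary data, and instructions below are hypothetical illustrations; CellWalk’s actual implementation is not public.

```swift
import FoundationModels

// A hypothetical tool that grounds the model's answers in the app's own
// scientific data, so explanations stay accurate to the in-app content.
struct GlossaryTool: Tool {
    let name = "lookupTerm"
    let description = "Looks up a scientific term in the app's glossary."

    @Generable
    struct Arguments {
        @Guide(description: "The term the user wants explained")
        var term: String
    }

    // Example data; a real app would query its curated dataset here.
    private let glossary = [
        "ribosome": "A molecular machine that synthesizes proteins."
    ]

    func call(arguments: Arguments) async throws -> ToolOutput {
        let definition = glossary[arguments.term.lowercased()] ?? "No entry found."
        return ToolOutput(definition)
    }
}

// The session is given the tool; the model calls back into it when a
// prompt requires information the app holds.
let session = LanguageModelSession(
    tools: [GlossaryTool()],
    instructions: "Explain scientific terms conversationally, grounded in the glossary tool."
)
```

Because the tool runs inside the app, the model’s responses can be anchored to vetted content rather than relying on its general knowledge alone.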
“The on-device model has great performance,” said Tim Davison, the developer behind CellWalk. “Our visuals have always been interactive, but with the Foundation Models framework, the text itself comes alive. Scientific data hidden in our app becomes a dynamic system that adapts to each learner, and the reliable structured data produced by the model made integration with our app seamless.”
Education apps spanning a variety of topics have also leveraged the Foundation Models framework. Grammo’s AI tutor, which helps users learn English grammar, gives conversational explanations of why an answer a user chose in an exercise was incorrect. In addition, Grammo now has a section of exercises that creates new questions on the fly for users who want to go deeper into a topic. Lil Artist combined the capabilities of the Foundation Models framework and the ImageCreator API to customize illustrated stories for children. Children select characters and themes within the app’s user interface instead of using an open-ended text field, making the experience more approachable and engaging.
As users save words they want to memorize in Vocabulary, the on-device foundation model uses natural language understanding to categorize the words into custom themes like “Verbs,” “Anatomy,” or “Difficult,” keeping the app organized and helpful for the user’s further review and practice. And in Platzi, an expansive education platform for Spanish speakers, users will be able to ask specific questions about the content they’re currently viewing in a video and receive a speedy response. By grounding the on-device model in the context of the lesson, the app is able to conversationally answer questions the user might have about the specific lesson.
Inspiring New Creativity and Productivity Features

Stuff is designed to keep track of the dozens of to-dos that come to users’ minds throughout the day, helping them organize their lives and achieve their goals. Thanks to the Foundation Models framework, Stuff now understands dates, tags, and lists as users type. They can simply write “Call Sophia Friday,” and Stuff populates the details instantly into the appropriate places. With Listen Mode, a user can speak their thoughts, such as saying “Do laundry tonight” and “Prep for trip next weekend,” and Stuff turns them into organized, editable tasks. In Scan Mode, users can capture handwritten tasks, even from paragraphs or scribbles, and add them directly to Stuff.
“The Foundation Models framework in iOS 26 has been a game changer for new workflows in Stuff,” said Austin Blake, the developer behind Stuff. “Running entirely on device, it’s powerful, predictable, and remarkably performant. Its simplicity made it possible for me to launch both Listen Mode and Scan Mode together in a single release — something that would’ve taken much longer otherwise.”
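Turning free-form input like “Call Sophia Friday” into structured fields is a natural fit for the framework’s guided generation. A minimal sketch follows; the `ParsedTask` type and its fields are hypothetical, since Stuff’s real data model is not public.

```swift
import FoundationModels

// Hypothetical structured output type: guided generation constrains the
// model's response to exactly this shape, so parsing never fails.
@Generable
struct ParsedTask {
    @Guide(description: "A short title for the to-do item")
    var title: String
    @Guide(description: "The due date as stated by the user, e.g. 'Friday', if any")
    var dueDate: String?
    @Guide(description: "Tags or lists inferred from the text")
    var tags: [String]
}

func parseTask(from input: String) async throws -> ParsedTask {
    let session = LanguageModelSession(
        instructions: "Extract a structured to-do item from the user's text."
    )
    // The response's content is a typed ParsedTask, not raw text.
    let response = try await session.respond(to: input, generating: ParsedTask.self)
    return response.content
}
```

The same pattern extends to speech or scanned text once it has been transcribed to a string.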

Whether a user is making their first vlog or creating content for their social channels, VLLO’s intuitive interface makes video editing feel natural and fun. The app takes editing to the next level by seamlessly integrating the Foundation Models framework with Apple’s Vision framework: VLLO intelligently analyzes a video preview and automatically suggests the perfect background music and dynamic stickers tailored to each scene.
“VLLO seamlessly integrates the Foundation Models framework with Vision technologies to lower the barriers that often stand in the way of new creators,” said Kyunghyun Lee, Vimosoft’s CEO and iOS developer. “Using both Apple’s Foundation Models and Vision frameworks, we were able to build advanced recommendation features quickly and efficiently — without implementing complex algorithms — using only simple prompts.”
Additional creativity and productivity apps have released new experiences by tapping into the Foundation Models framework. Signeasy can now generate summaries, highlight key points, and support a conversational interface where users can ask document-specific questions and receive quick responses. Agenda created Ask Agenda, an intelligent assistant that users can tap to ask questions about their library of notes. Ask Agenda searches for relevant information and gives a plain-language response with links to the most relevant notes.
Also, Detail: AI Video Editor leverages the Foundation Models framework to turn a draft or outline into a ready-to-record teleprompter script. When a video is ready to share, a title, description, hashtags, and messages can all be automatically generated. Essayist taps into Apple’s Vision framework alongside the Foundation Models framework to extract information from PDFs and convert them into structured references and citations. Users can simply drag and drop a PDF and instantly generate a reference in the citation style of their choice. OmniFocus 4 — an app that helps users create tasks on the fly and organize them with projects, tags, and dates — can now generate projects and next steps on a user’s behalf, such as helping them know what to pack for an upcoming trip. The app can automatically fill in suggestions based on existing tags and can even propose new tags for captured content.
Building with the Foundation Models Framework
The Foundation Models framework is tightly integrated with Swift, making it easy for developers to send requests to the 3-billion-parameter on-device model right from their existing code. The framework provides guided generation, which ensures that the model responds in a consistent format that developers can rely on. Developers can also provide tools that the model calls back into when it requires more information, ensuring it has the right context to respond. The Foundation Models framework is available with iOS 26, iPadOS 26, and macOS 26, and works on any Apple Intelligence-compatible device when Apple Intelligence is enabled.
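In practice, a minimal request looks roughly like the following sketch, including a check that the on-device model is available (i.e., Apple Intelligence is enabled on a supported device); the summarization prompt itself is an illustrative example.

```swift
import FoundationModels

// A minimal request to the on-device model.
// Requires iOS 26, iPadOS 26, or macOS 26 with Apple Intelligence enabled.
func summarize(_ text: String) async throws -> String? {
    // The model may be unavailable if Apple Intelligence is off,
    // the device is unsupported, or model assets are still downloading.
    guard case .available = SystemLanguageModel.default.availability else {
        return nil
    }
    let session = LanguageModelSession()
    let response = try await session.respond(
        to: "Summarize the following in one sentence: \(text)"
    )
    return response.content
}
```

Since inference happens entirely on device, the call works offline and incurs no per-request cost.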
1. Apple Intelligence is available in beta with support for these languages: English, French, German, Italian, Portuguese (Brazil), Spanish, Chinese (simplified), Japanese, and Korean. Some features may not be available in all regions or languages. For feature and language availability and system requirements, see support.apple.com/en-us/121115.