
Design User Controls for Automation




The Level of Automation in an AI system shapes how users experience it, how much they trust it, and how well it works. AI predictions are never certain, so you need to give users clear control over when and how automation steps in. Good controls help users stay in charge, make fewer mistakes, and adjust the AI as their needs change. There are two primary scenarios to keep in mind in this design pattern:
  1. Allow opt-in and opt-out controls to build trust

No system can perfectly cater to every user in every situation, so it’s important to let users modify, refine, or disable AI-generated outputs as needed. Allow users to explicitly choose when to use automated AI assistance. This could involve toggles, sliders, or clear prompts.

For example:
  1. In a writing AI assistant, users might opt in to grammar and spelling suggestions but opt out of style recommendations.
  2. An AI email assistant could prioritize urgent emails and auto-schedule meetings, but users should be able to set firm rules like "Never delete emails without my approval" or require human approval before any email is sent.
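To make this concrete, here is a minimal sketch of what per-feature opt-in/opt-out controls could look like in code. The names (`AssistancePreferences`, `canAutomate`, and so on) are hypothetical, not a real API; the point is that every automated action is gated on an explicit user choice, and the user's firm rules always win.

```typescript
// Hypothetical per-feature opt-in/opt-out settings for an AI assistant.
type AssistanceFeature = "grammar" | "spelling" | "style" | "autoSchedule";

interface AssistancePreferences {
  // Each feature defaults to off; the user explicitly opts in.
  enabled: Record<AssistanceFeature, boolean>;
  // Firm rules the AI must never bypass, regardless of other settings.
  requireApprovalBeforeSending: boolean;
  neverDeleteWithoutApproval: boolean;
}

const defaultPreferences: AssistancePreferences = {
  enabled: { grammar: false, spelling: false, style: false, autoSchedule: false },
  requireApprovalBeforeSending: true, // conservative defaults keep the user in charge
  neverDeleteWithoutApproval: true,
};

// Gate every automated action on the user's explicit choice.
function canAutomate(prefs: AssistancePreferences, feature: AssistanceFeature): boolean {
  return prefs.enabled[feature];
}

function canDeleteEmail(prefs: AssistancePreferences): boolean {
  // The firm rule always wins: the AI may only suggest deletion.
  return !prefs.neverDeleteWithoutApproval;
}

// Example: the user opts in to grammar and spelling, but not style suggestions.
const prefs: AssistancePreferences = {
  ...defaultPreferences,
  enabled: { ...defaultPreferences.enabled, grammar: true, spelling: true },
};

console.log(canAutomate(prefs, "style")); // false – style stays opted out
console.log(canDeleteEmail(prefs));       // false – deletion needs explicit approval
```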


  2. Allow users to take control when AI fails

An AI system can make mistakes and sometimes fail. In the case of false positive and false negative predictions, it is critical to design for graceful failure: when the AI misclassifies or automates incorrectly, users should be able to easily review, correct, and override the system. For example, when Face ID fails on your iPhone or iPad, you are prompted to enter your passcode to unlock the device.
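One common way to implement graceful failure is to gate automation on the model's confidence and hand control back to the user otherwise, much like Face ID falling back to the passcode prompt. The sketch below assumes a hypothetical `Prediction` type, a tunable confidence threshold, and an `askUser` callback; it is illustrative, not a real library API.

```typescript
// Hypothetical prediction with a confidence score between 0 and 1.
interface Prediction<T> {
  value: T;
  confidence: number;
}

interface ReviewDecision<T> {
  automated: boolean;
  value: T;
}

// Assumed cutoff; tune per product and per the cost of a wrong automated action.
const CONFIDENCE_THRESHOLD = 0.9;

// Act automatically only when the model is confident; otherwise fall back
// to the user. The user can still review and override the automated result later.
async function actOrAskUser<T>(
  prediction: Prediction<T>,
  askUser: () => Promise<T>,
): Promise<ReviewDecision<T>> {
  if (prediction.confidence >= CONFIDENCE_THRESHOLD) {
    return { automated: true, value: prediction.value };
  }
  // Graceful failure: hand control back to the user instead of guessing.
  const manual = await askUser();
  return { automated: false, value: manual };
}

// Example: a low-confidence "urgent" classification falls back to the user.
actOrAskUser({ value: "urgent", confidence: 0.62 }, async () => "not urgent")
  .then((decision) => console.log(decision)); // { automated: false, value: "not urgent" }
```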

