Fresh Resources for Web Designers and Developers (February 2025)

Original Source: https://www.hongkiat.com/blog/designers-developers-monthly-02-2025/

It’s time for our monthly roundup!

We’ve gathered a bunch of useful resources for our fellow web developers, including several AI-powered tools, libraries, and other cool stuff. We hope you find them as exciting as we do!

Let’s jump in to see the full list.

Qwen Chat

Qwen Chat is an AI-powered conversational app developed by Alibaba Cloud. It uses their Qwen 2.5 series LLM. It’s not just for chatting – it can also interpret and generate images and handle audio inputs. It supports over 29 languages and can manage conversations with up to 128,000 tokens, allowing for detailed and context-rich interactions. It’s also free, even letting you use it without creating an account. A solid alternative to OpenAI’s chat models.

Qwen Chat AI conversational interface powered by Alibaba Cloud

Orate

Orate is a Node.js library that makes it easy to generate realistic speech, transcribe audio, and modify voices through a single, unified API. It integrates with OpenAI, ElevenLabs, and AssemblyAI, so you can switch between services without worrying about API complexities or differences. It supports text-to-speech, speech-to-text, and speech-to-speech, making it a great library for adding advanced speech capabilities to your project.

Orate Node.js library for AI-driven speech processing
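To make the idea of a unified speech API concrete, here is a minimal TypeScript sketch; the speak helper and the provider import paths below are assumptions for illustration rather than Orate's documented API, so check the project's README for the real signatures.

// Hypothetical sketch of a provider-agnostic text-to-speech call.
// The speak() helper and the openai/elevenlabs provider objects are
// illustrative placeholders, not verified Orate exports.
import { speak } from "orate";
import { openai } from "orate/openai";
import { elevenlabs } from "orate/elevenlabs";

async function narrate(text: string) {
  // Switching vendors should only mean swapping the model argument,
  // not rewriting the surrounding code.
  return speak({
    model: openai.tts("tts-1", "alloy"), // or: elevenlabs.tts("multilingual_v2")
    prompt: text,
  });
}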

Loras

Loras.dev is an open-source AI image generator that allows you to create stylized images in seconds by entering a prompt and choosing a Low-Rank Adaptation (LoRA). It is powered by Flux LoRAs through Together AI, and it is completely free to use.

Loras AI image generator interface with LoRA adaptation options

TraeAI

Trae.ai is an AI-powered IDE by ByteDance, the company behind TikTok. Its distinctive features include a “Builder Mode”, which lets you generate app prototypes quickly, and a “Chat Mode”, which helps you analyze and improve code.

It is available on macOS and will soon be on Windows. It supports both Chinese and English and includes free access to GPT-4o and Claude-3.5-Sonnet.

TraeAI intelligent IDE with Builder and Chat Mode by ByteDance

GPT Crawler

GPT Crawler is an open-source tool by BuilderIO that helps you gather website content and turn it into a knowledge file for creating a custom GPT. It crawls web pages, extracts key information like titles, URLs, and text, and saves it as a JSON file. You can then upload this file to OpenAI to build a chatbot or assistant with up-to-date information from any website.

GPT Crawler extracting website content into a JSON file for AI training
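For context, a crawl is driven by a small TypeScript config file in the project; the sketch below shows roughly what that looks like, with field names recalled from the README, so verify them against config.ts in the BuilderIO/gpt-crawler repository before running it.

// config.ts (approximate): tells gpt-crawler where to start, which links
// to follow, how many pages to visit, and where to write the JSON output
// that you later upload to OpenAI as a knowledge file.
export const defaultConfig = {
  url: "https://www.builder.io/c/docs/developers", // crawl entry point
  match: "https://www.builder.io/c/docs/**",       // glob for links to follow
  maxPagesToCrawl: 50,                             // safety cap on crawl size
  outputFileName: "output.json",                   // resulting knowledge file
};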

AiderAI

Aider is an AI-powered command-line tool for pair programming with models like GPT-4o, Claude 3.5 Sonnet, and DeepSeek. It integrates with Git, allowing it to track changes and create commits automatically. It supports multiple languages, provides context-aware suggestions, and includes features like image uploads and voice commands. A well-rounded tool for AI-powered coding assistance!

Aider AI-powered CLI tool for pair programming with Git integration

InvokeAI

InvokeAI is an AI-powered creative engine for generating images through a web-based interface. You can generate images from prompts, fine-tune outputs with negative prompts, and automate tasks through its API. It’s available on Windows, macOS, and Linux.

InvokeAI creative engine generating AI-powered images via web interface

CodeCapy

CodeCapy is a GitHub app that generates natural language descriptions of code changes and automatically tests them when you open a pull request. It writes and runs UI tests for you, taking the mundane work of creating and running tests off your plate.

CodeCapy GitHub app automating code change documentation and testing

AI Shell

AI Shell is a command-line tool that converts plain English into shell commands. It features a chat mode for interactive help, a silent mode for quick outputs, and customizable settings. If you’re not a bash expert, this tool makes it easier by generating commands with clear explanations.

AI Shell CLI tool converting natural language into shell commands

Oh One Pro

Oh One Pro is a free macOS tool for analyzing documents using OpenAI’s o1-pro and o3-mini models. It allows you to convert PDFs and code into XML or images and efficiently handles files with simple drag-and-drop functionality. It’s especially useful for developers, researchers, and anyone needing AI-assisted document analysis on macOS.

Oh One Pro macOS tool for AI-assisted document and code analysis

NextChat

NextChat is an open-source, cross-platform AI app that supports multiple models like GPT-4, GPT-3.5, and Gemini-Pro. It provides customizable chatbot experiences with local data storage, Markdown support, and multi-language interfaces. It is compatible with various platforms, supports self-hosted models, and is available as a desktop app, web app, or deployable on Vercel.

NextChat AI chatbot with support for GPT-4, GPT-3.5, and Gemini-Pro

Sonic

Sonic is a platform developed by Tencent and Zhejiang University that generates realistic portrait animations from audio. It produces natural facial expressions, smooth head movements, and stable videos in various styles and resolutions. A great tool for creating animated characters for your projects.

Sonic AI-powered portrait animation tool generating realistic facial expressions

Posting

Posting is a tool that lets you send and test HTTP requests directly from your terminal. It’s like Postman or Insomnia but designed for developers who prefer working in the command line or over SSH. You can import API details using OpenAPI specs and save your requests in easy-to-read YAML files, making it easier to track changes. If you’re into coding and need a fast, no-frills way to work with APIs, this tool is worth checking out!

Posting command-line tool for testing HTTP requests and APIs

Speaches

Speaches is designed to make working with speech technology easier and more efficient. It helps process, analyze, and generate speech content, including transcribing audio, summarizing spoken text, and creating natural-sounding voice outputs. A great tool for anyone working with speech technology, whether you’re a beginner or an experienced developer.

Speaches AI tool for speech processing, transcription, and voice generation

Toolpad

Toolpad, created by MUI, is an open-source, low-code tool that helps developers quickly build React apps and dashboards. It simplifies the creation of internal tools with a drag-and-drop interface while allowing you to use your own React components. A great tool if you need to build an app quickly.

Toolpad low-code platform for building React apps and dashboards

Dashy

Dashy is a self-hostable, customizable dashboard designed to help you organize and access your self-hosted apps quickly and securely. It offers features like status checks, widgets, themes, and a UI editor, with optional basic authentication for added security. If you’re looking for a simple way to streamline your home lab or app management, Dashy is worth exploring!

Dashy self-hosted customizable dashboard for managing apps

Neko

Neko is a self-hosted virtual browser that runs in a Docker container. It allows you to securely access the internet or collaborate with others in a shared browsing environment. It supports various configurations, including ARM-based systems. While it provides flexibility for advanced users, it also includes zero-knowledge installation options for beginners, making it easy to set up even if you don’t have Docker experience.

Neko self-hosted virtual browser running in a Docker container

StandardSchema

StandardSchema is a common interface specification for TypeScript validation libraries, designed to encourage interoperability among JavaScript validator tools. Any tool that consumes schemas through this standard can work with every compliant library without needing a separate adapter for each one. This lets developers set things up once and use them anywhere, saving time and effort.

StandardSchema tool for TypeScript interoperability in JavaScript validation
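To make this concrete, here is a short TypeScript sketch of a consumer function that accepts any compliant validator; it follows the Standard Schema interface as I recall it from the published spec (a ~standard property exposing validate), so treat the exact type names as an approximation and verify them against @standard-schema/spec.

// Works with any Standard Schema-compliant library (Zod, Valibot, ArkType, ...)
// without per-library adapter code. Type names recalled from the spec package.
import type { StandardSchemaV1 } from "@standard-schema/spec";

export async function parseWith<T extends StandardSchemaV1>(
  schema: T,
  input: unknown
): Promise<StandardSchemaV1.InferOutput<T>> {
  let result = schema["~standard"].validate(input);
  if (result instanceof Promise) result = await result; // validation may be async
  if (result.issues) {
    // Collect the issue messages into a single error for simplicity.
    throw new Error(result.issues.map((issue) => issue.message).join("; "));
  }
  return result.value;
}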

Dagger

Dagger is a library for creating components in Laravel’s Blade templating engine, inspired by Laravel’s anonymous components but with extra features.

Its standout feature is the compiler, which efficiently inlines your component code, performs optimizations, and introduces powerful capabilities like Attribute Cache, Attribute Forwarding, and Slot Forwarding. If you’re a Laravel developer looking to streamline component creation and boost performance, Dagger is definitely worth checking out!

Dagger Laravel Blade component library for enhanced templating

NginxUI

Nginx UI is a web-based tool that simplifies Nginx server management by offering a clean, user-friendly dashboard with real-time stats. It features server block management, SSL/TLS setup, and reverse proxy configuration. If you’re looking for an efficient way to manage your Nginx server without hassle, Nginx UI is definitely worth exploring!

Nginx UI web-based tool for easy Nginx server management

The post Fresh Resources for Web Designers and Developers (February 2025) appeared first on Hongkiat.

Behind the fine print: Understanding AI apps and privacy

Original Source: https://ecommerce-platforms.com/articles/understanding-ai-apps-and-privacy

Artificial intelligence has quickly become part of the contemporary zeitgeist — yet ethical considerations around the subject remain unresolved. How many users are fully aware of what they’re signing up to?

Here, by homing in on the terms and conditions and privacy policies behind the most popular AI tools and apps, Ecommerce Platforms unpacks what you need to know when using these tools for your day-to-day business needs.

Continue reading Behind the fine print: Understanding AI apps and privacy

The post Behind the fine print: Understanding AI apps and privacy appeared first on Ecommerce Platforms.

WebGPU Fluid Simulations: High Performance & Real-Time Rendering

Original Source: https://tympanus.net/codrops/2025/02/26/webgpu-fluid-simulations-high-performance-real-time-rendering/

A detailed look at the techniques behind high-performance, real-time, and visually stunning fluid simulations with WebGPU.

How to Download TikTok Videos (2025 Version)

Original Source: https://www.hongkiat.com/blog/download-tiktok-videos/

Ever found yourself wanting to save that perfect TikTok video but weren’t sure how? Whether you’re looking to build a collection of your favorite dance moves, cooking recipes, or just want to share videos with friends offline, I’ve got you covered.

In this guide, I’ll walk you through several proven methods to download TikTok videos, from the simplest built-in options to powerful batch downloading solutions.

Download Any TikTok Video Directly

TikTok actually makes it super straightforward to save videos right from the app. Here’s what you need to do:

Fire up your TikTok app
Locate the video you want to save
Look for the Share icon (it’s that arrow pointing to the right)
Hit “Save video”

TikTok mobile app save video button interface

Tip: You can also just long-press on any video to bring up the sharing menu and tap “Save video” – it’s a neat little shortcut I use all the time.

Once you do this, the video will automatically land in your device’s photo library. Simple, right?

But here’s the catch – sometimes you might notice the download option isn’t there. Don’t worry, you’re not doing anything wrong. This usually happens when creators lock down their content to protect their work. If you run into this situation, keep reading…

Quick heads up: This direct download method only works on the mobile app. If you’re browsing TikTok on your desktop, you’ll need to try one of the other methods I’m about to share.

Check out this guide on how to remove the TikTok watermark.

Batch Download TikTok Videos

If you’re looking to download multiple TikTok videos at once (maybe you’re archiving your favorite creator’s content?), PullTube by MyMixApps is your new best friend. Here’s why I love it and how to use it:

Start by downloading and installing PullTube on your Mac
Copy your TikTok video URLs
Either paste them directly or use the + button to add multiple URLs (you can separate them with spaces or new lines)
Select the videos you want and click “Download video”

PullTube interface showing URL paste options

Tip: Enable the “Convert Encoded videos to MP4” option if you want your downloads in MP4 format – it makes them much more compatible with most devices.

PullTube MP4 conversion settings

PullTube gives you a 7-day free trial, after which it costs $14.99 as a one-time purchase.

The Drag-and-Drop Champion: Downie

While PullTube is great, let me introduce you to my personal favorite: Downie by Charlie Monroe Software. What makes it special? Its incredibly intuitive drag-and-drop interface.

You literally just drag TikTok video URLs into the app, and it starts downloading automatically. No fuss, no complicated settings.

Downie app drag and drop interface demonstration

And yes, it handles batch downloads just as well as PullTube – just drag in multiple URLs, and you’re good to go.

Downie batch download process in action

Downie offers a 14-day trial period, with a one-time cost of $19.99 afterward. While it’s slightly pricier than PullTube, its seamless user experience makes it my go-to choice.

My Final Take

Downloading TikTok videos doesn’t have to be complicated. When the built-in save option works, it’s perfect for quick, one-off downloads. For those times when you need more flexibility or batch downloading capabilities, both PullTube and Downie are excellent options.

Personally, I’ve settled on Downie as my primary tool because of its simplicity and reliability, but I keep PullTube installed as a backup.

Remember to respect creators’ content and only download videos for personal use. Also, check out our guide on how to turn TikTok videos into ringtones.

Happy downloading!

The post How to Download TikTok Videos (2025 Version) appeared first on Hongkiat.

10 Common Web Development Mistakes to Avoid Right Now

Original Source: https://www.sitepoint.com/common-web-development-mistakes/?utm_source=rss

Discover the top 10 common web development mistakes that can hurt your site’s performance, user experience, and SEO.

Continue reading 10 Common Web Development Mistakes to Avoid Right Now on SitePoint.

5 Best AI-Powered Git Commit Message Tools Compared

Original Source: https://www.hongkiat.com/blog/best-ai-tools-for-git-commit-messages/

Writing good Git commit messages is important for maintaining a clear project history, but it can often feel like a chore. AI-powered tools simplify this process by helping you create messages quickly and easily.

In this article, we’ll review five of these tools. Let’s dive in to see how they work, the benefits they offer, and any limitations you should consider.

GitHub Copilot

GitHub Copilot is a popular AI tool developed by GitHub. Once enabled, it can help you boost productivity by suggesting code snippets, completing lines of code, and generating commit messages based on changes in the code.

It integrates seamlessly with Visual Studio Code (VSCode). Once you’ve enabled Copilot in VSCode, you can find the small sparkle icon within the Git commit input.

Simply click the icon to generate the commit message. For the best results, I recommend staging files with related changes before generating a commit message.

GitHub Copilot Git commit message generation example in VSCode

Pros:

Reliable and consistent at generating accurate commit messages based on file changes.
Deep integration with the GitHub ecosystem, VSCode, and other popular IDEs like JetBrains IDE through plugins.
Free tier available.

Cons:

Free tier has usage limits. Features may not be usable if the limit is reached.
By default, it only generates short, basic messages, with no full descriptions or custom formats like Commitizen.
No Ollama support.

CursorAI

CursorAI is an AI-focused code editor that includes a built-in tool for generating commit messages. Since it’s based on the same editor as Visual Studio Code, it works similarly. You’ll find a sparkle icon in the Git commit input within the “Source Control” panel; click it to generate a message.

However, in my experience, it often produces less accurate commit messages compared to GitHub Copilot.

For instance, with the same staged files and changes (see the GitHub Copilot section above), GitHub Copilot correctly identifies renamed files and improved structure, while CursorAI describes them as additions instead, as shown below:

CursorAI Git commit message comparison showing inaccurate results

Pros:

AI feature works out of the box without additional extensions or plugins.
A free tier is available for accessing the AI tools.
Supports multiple models from OpenAI, Anthropic, Google, and Azure.

Cons:

Free tier comes with usage limits. You might hit the limit if you frequently use the AI feature in your project.
May generate less accurate commit messages compared to GitHub Copilot.
No Ollama support.

czg

czg is a tool based on the popular Commitizen framework, improved with AI capabilities. It helps you write structured and consistent commit messages using a guided workflow.

You can easily install it via NPM, and it works with both OpenAI and Ollama, allowing you to choose the AI model for generating commit messages.

After you’ve installed it and configured it, you can run:

czg ai

If you’re using Ollama, the output depends on your chosen model. For better results, I recommend using ones with code capabilities like qwen2.5-coder, yi-coder, or codellama. Larger models generally provide more accurate messages.

Accept the commit message, and it will create the commit for you.

czg AI commit message tool example with Ollama integration

Pros:

Full support for Commitizen configuration.
Supports emojis.
Supports both OpenAI and Ollama.
Free and open-source.

Cons:

Designed specifically to generate commits that follow the Commitizen configuration and specification.
Configuration may not be straightforward for some users, but it should be fine if you’re a developer already familiar with the command line.

OpenCommit

OpenCommit is a handy CLI tool that helps you write Git commit messages for your code changes quickly. Instead of spending time thinking about what to write, it analyzes your changes and creates a commit message in seconds.

It supports popular OpenAI models like GPT-3 and GPT-4, and you can even use local models with Ollama. It’s easy to set up and can add fun emojis to your messages if you like.

OpenCommit CLI tool generating a Git commit message based on code changes

Pros:

OpenAI API and Ollama support.
Uses Conventional Commits by default, configurable through global variables or CLI options.
GitHub Action support.
Free and open-source.

Cons:

The messages generated often do not accurately describe the changeset. They’re sometimes redundant or poorly formatted.

AI Commits

This is another CLI tool that helps you automatically generate clear and relevant commit messages based on your code changes. It uses OpenAI to analyze the changes and suggest suitable commit messages for you.

Like czg and OpenCommit, you can install it via NPM. Once installed and set up, you can use the following command:

aicommits

AI Commits CLI tool generating Git commit messages

Pros:

Easy to install and straightforward to configure. You only need to set your OpenAI key, and you’re all set.
Supports Conventional Commits using CLI parameters.
Free and open-source.

Cons:

Does not support Ollama.

Final Thoughts

Choosing the right AI commit tool depends on your workflow and preferences.

For example, if you’re already using GitHub Copilot (like I am!) or Cursor, it’s probably worth sticking with the editor for commit messages; it’s convenient and integrated. On the other hand, if your team follows strict commit standards (like projects using Commitizen), tools like czg or AI Commits might be a better choice.

Most of these tools are free or offer trials, so experiment! Try one for a day or two and see how it feels. You’ll probably save more time (and brainpower) than you’d expect.

The post 5 Best AI-Powered Git Commit Message Tools Compared appeared first on Hongkiat.

Monster Hunter Wilds review: a new open-world design feels fresh and exciting

Original Source: https://www.creativebloq.com/entertainment/gaming/monster-hunter-wilds-review-a-new-open-world-design-feels-fresh

The most accessible Monster Hunter game to date.

AI-Assisted Coding for iOS Development: CursorAI and Upcoming Swift Assist

Original Source: https://www.sitepoint.com/ai-assisted-coding-for-ios-development/?utm_source=rss

Discover how AI tools like CursorAI are transforming iOS development, with practical tips and real-world examples for balancing AI assistance with developer expertise.

Continue reading AI-Assisted Coding for iOS Development: CursorAI and Upcoming Swift Assist on SitePoint.

Designer Spotlight: Ivan Gorbunov

Original Source: https://tympanus.net/codrops/2025/02/21/designer-spotlight-ivan-gorbunov/

Explore Ivan Gorbunov’s approach to minimalism, creativity, and functional design.

Human-Centered Design Through AI-Assisted Usability Testing: Reality Or Fiction?

Original Source: https://smashingmagazine.com/2025/02/human-centered-design-ai-assisted-usability-testing/

Unmoderated usability testing has been steadily growing more popular with the assistance of online UX research tools. Allowing participants to complete usability testing without a moderator, at their own pace and convenience, can have a number of advantages.

The first is freedom from strict schedules and from moderator availability, meaning that many more participants can be recruited quickly and cost-effectively. It also lets your team see how users interact with your solution in their natural environment, on their own devices and setups. Overcoming the challenges of distance and differences in time zones in order to obtain data from all around the globe also becomes much easier.

However, forgoing the use of moderators also has its drawbacks. The moderator brings flexibility, as well as a human touch into usability testing. Since they are in the same (virtual) space as the participants, the moderator usually has a good idea of what’s going on. They can react in real-time depending on what they witness the participant do and say. A moderator can carefully remind the participants to vocalize their thoughts. To the participant, thinking aloud in front of a moderator can also feel more natural than just talking to themselves. When the participant does something interesting, the moderator can prompt them for further comment.

Meanwhile, a traditional unmoderated study lacks such flexibility. In order to complete tasks, participants receive a fixed set of instructions. Once they are done, they can be asked to complete a static questionnaire, and that’s it.

The feedback that the research & design team receives will be completely dependent on what information the participants provide on their own. Because of this, the phrasing of instructions and questions in unmoderated testing is crucial. However, even if everything is planned out perfectly, the lack of adaptive questioning means that a lot of information will still remain unsaid, especially with regular people who are not trained in providing user feedback.

If the usability test participant misunderstands a question or doesn’t answer completely, the moderator can always ask for a follow-up to get more information. A question then arises: Could something like that be handled by AI to upgrade unmoderated testing?

Generative AI could present a new, potentially powerful tool for addressing this dilemma once we consider their current capabilities. Large language models (LLMs), in particular, can lead conversations that can appear almost humanlike. If LLMs could be incorporated into usability testing to interactively enhance the collection of data by conversing with the participant, they might significantly augment the ability of researchers to obtain detailed personal feedback from great numbers of people. With human participants as the source of the actual feedback, this is an excellent example of human-centered AI as it keeps humans in the loop.

There are quite a number of gaps in the research on AI in UX. To help address this, we at UXtweak Research conducted a case study aimed at investigating whether AI can generate follow-up questions that are meaningful and that elicit valuable answers from participants.

Asking participants follow-up questions to extract more in-depth information is just one portion of the moderator’s responsibilities. However, it is a reasonably-scoped subproblem for our evaluation since it encapsulates the ability of the moderator to react to the context of the conversation in real time and to encourage participants to share salient information.

Experiment Spotlight: Testing GPT-4 In Real-Time Feedback

The focus of our study was on the underlying principles rather than any specific commercial AI solution for unmoderated usability testing. After all, AI models and prompts are being tuned constantly, so findings that are too narrow may become irrelevant within a week or two of a new version being released. However, since AI models are also black boxes based on artificial neural networks, the method by which they generate their specific output is not transparent.

Our results can show what you should be wary of to verify that an AI solution that you use can actually deliver value rather than harm. For our study, we used GPT-4, which at the time of the experiment was the most up-to-date model by OpenAI, also capable of fulfilling complex prompts (and, in our experience, dealing with some prompts better than the more recent GPT-4o).

In our experiment, we conducted a usability test with a prototype of an e-commerce website. The tasks involved the common user flow of purchasing a product.

Note: See our article published in the International Journal of Human-Computer Interaction for more detailed information about the prototype, tasks, questions, and so on.

In this setting, we compared the results with three conditions:

A regular static questionnaire made up of three pre-defined questions (Q1, Q2, Q3), serving as an AI-free baseline. Q1 was open-ended, asking the participants to narrate their experiences during the task. Q2 and Q3 can be considered non-adaptive follow-ups to Q1 since they asked participants more directly about usability issues and to identify things that they did not like.
The question Q1, serving as a seed for up to three GPT-4-generated follow-up questions as the alternative to Q2 and Q3.
All three pre-defined questions, Q1, Q2, and Q3, each used as a seed for its own GPT-4 follow-up.

A dedicated GPT-4 prompt, provided with the seed question and the participant’s earlier answers in that group, was used to generate the follow-up questions.
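As a rough illustration of the mechanics, the TypeScript sketch below shows one way such a step could be wired up with the OpenAI Node SDK; the prompt wording and the function itself are illustrative assumptions, not the prompt or code used in the study.

// Illustrative only: ask an LLM for a single follow-up question, given the
// seed question, the participant's answer, and the questions already asked.
// This mirrors the general idea described above, not the study's actual prompt.
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

export async function generateFollowUp(
  seedQuestion: string,
  participantAnswer: string,
  askedSoFar: string[]
): Promise<string> {
  const completion = await client.chat.completions.create({
    model: "gpt-4",
    messages: [
      {
        role: "system",
        content:
          "You are assisting an unmoderated usability test. Ask exactly one short, " +
          "non-leading follow-up question that clarifies the participant's answer. " +
          "Do not repeat questions that were already asked.",
      },
      {
        role: "user",
        content:
          `Seed question: ${seedQuestion}\n` +
          `Participant answer: ${participantAnswer}\n` +
          `Already asked: ${askedSoFar.join(" | ") || "none"}`,
      },
    ],
  });
  return completion.choices[0].message.content ?? "";
}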

To assess the impact of the AI follow-up questions, we then compared the results on both a quantitative and a qualitative basis. One of the measures that we analyzed is informativeness — ratings of the responses based on how useful they are at elucidating new usability issues encountered by the user.

In our results, informativeness dropped significantly between the seed questions and their AI follow-ups. The follow-ups rarely helped identify a new issue, although they did help elaborate further details.

The emotional reactions of the participants offer another perspective on AI-generated follow-up questions. Our analysis of the prevailing emotional valence based on the phrasing of answers revealed that, at first, the answers started with a neutral sentiment. Afterward, the sentiment shifted toward the negative.

In the case of the pre-defined questions Q2 and Q3, this could be seen as natural. While the seed question Q1 was open-ended, asking the participants to explain what they did during the task, Q2 and Q3 focused more on the negative: usability issues and other disliked aspects. Curiously, the follow-up chains generally received an even more negative reception than their seed questions, and not for the same reason.

Frustration was common as participants interacted with the GPT-4-driven follow-up questions. This is rather critical, considering that frustration with the testing process can sidetrack participants from taking usability testing seriously, hinder meaningful feedback, and introduce a negative bias.

A major aspect that participants were frustrated with was redundancy. Repetitiveness, such as re-explaining the same usability issue, was quite common. While pre-defined follow-up questions yielded 27-28% of repeated answers (it’s likely that participants already mentioned aspects they disliked during the open-ended Q1), AI-generated questions yielded 21%.

That’s not that much of an improvement, given that the comparison is made to questions that literally could not adapt to prevent repetition at all. Furthermore, when AI follow-up questions were added to obtain more elaborate answers for every pre-defined question, the repetition ratio rose further to 35%. In the variant with AI, participants also rated the questions as significantly less reasonable.

Answers to AI-generated questions contained a lot of statements like “I already said that” and “The obvious AI questions ignored my previous responses.”

The prevalence of repetition within the same group of questions (the seed question, its follow-up questions, and all of their answers) can be seen as particularly problematic since the GPT-4 prompt had been provided with all the information available in this context. This demonstrates that a number of the follow-up questions were not sufficiently distinct and lacked the direction that would warrant them being asked.

Insights From The Study: Successes And Pitfalls

To summarize the usefulness of AI-generated follow-up questions in usability testing, there are both good and bad points.

Successes:

Generative AI (GPT-4) excels at refining participant answers with contextual follow-ups.
Depth of qualitative insights can be enhanced.

Challenges:

Limited capacity to uncover new issues beyond pre-defined questions.
Participants can easily grow frustrated with repetitive or generic follow-ups.

While extracting somewhat more elaborate answers is a benefit, it can easily be overshadowed if poor question quality and relevance become too distracting. This can inhibit participants’ natural behavior and the relevance of their feedback if they end up focusing on the AI.

Therefore, in the following section, we discuss what to be careful of, whether you are picking an existing AI tool to assist you with unmoderated usability testing or implementing your own AI prompts or even models for a similar purpose.

Recommendations For Practitioners

Context is the be-all and end-all when it comes to the usefulness of follow-up questions. Most of the issues that we identified with the AI follow-up questions in our study can be tied to the ignorance of proper context in one form or another.

Based on real blunders that GPT-4 made while generating questions in our study, we have meticulously collected and organized a list of the types of context that these questions were missing. Whether you’re looking to use an existing AI tool or are implementing your own system to interact with participants in unmoderated studies, you are strongly encouraged to use this list as a high-level checklist. With it as the guideline, you can assess whether the AI models and prompts at your disposal can ask reasonable, context-sensitive follow-up questions before you entrust them with interacting with real participants.

Without further ado, these are the relevant types of context:

General Usability Testing Context.
The AI should incorporate standard principles of usability testing in its questions. This may appear obvious, and it actually is. But it needs to be said, given that we have encountered issues related to this context in our study. For example, the questions should not be leading, ask participants for design suggestions, or ask them to predict their future behavior in completely hypothetical scenarios (behavioral research is much more accurate for that).
Usability Testing Goal Context.
Different usability tests have different goals depending on the stage of the design, business goals, or features being tested. Each follow-up question and the participant’s time used in answering it are valuable resources. They should not be wasted on going off-topic. For example, in our study, we were evaluating a prototype of a website with placeholder photos of a product. When the AI starts asking participants about their opinion of the displayed fake products, such information is useless to us.
User Task Context.
Whether the tasks in your usability testing are goal-driven or open and exploratory, their nature should be properly reflected in follow-up questions. When the participants have freedom, follow-up questions could be useful for understanding their motivations. By contrast, if your AI tool foolishly asks the participants why they did something closely related to the task (e.g., placing the specific item they were supposed to buy into the cart), you will seem just as foolish by association for using it.
Design Context.
Detailed information about the tested design (e.g., prototype, mockup, website, app) can be indispensable for making sure that follow-up questions are reasonable. Follow-up questions should require input from the participant. They should not be answerable just by looking at the design. Interesting aspects of the design could also be reflected in the topics to focus on. For example, in our study, the AI would occasionally ask participants why they believed a piece of information that was very prominently displayed in the user interface, making the question irrelevant in context.
Interaction Context.
If Design Context tells you what the participant could potentially see and do during the usability test, Interaction Context comprises all their actual actions, including their consequences. This could incorporate the video recording of the usability test, as well as the audio recording of the participant thinking aloud. The inclusion of interaction context would allow follow-up questions to build on the information that the participant already provided and to further clarify their decisions. For example, if a participant does not successfully complete a task, follow-up questions could be directed at investigating the cause, even as the participant continues to believe they have fulfilled their goal.
Previous Question Context.
Even when the questions you ask them are mutually distinct, participants can find logical associations between various aspects of their experience, especially since they don’t know what you will ask them next. A skilled moderator may decide to skip a question that a participant already answered as part of another question, instead focusing on further clarifying the details. AI follow-up questions should be capable of doing the same to keep the testing from becoming a repetitive slog.
Question Intent Context.
Participants routinely answer questions in a way that misses their original intent, especially if the question is more open-ended. A follow-up can approach the question from another angle to retrieve the intended information. However, if the participant’s answer is technically a valid reply, but only to the letter rather than the spirit of the question, the AI can miss this fact. Clarifying the intent could help address this.

When assessing a third-party AI tool, a question to ask is whether the tool allows you to provide all of the contextual information explicitly.

If AI does not have an implicit or explicit source of context, the best it can do is make biased, opaque guesses that can result in irrelevant, repetitive, and frustrating questions.

Even if you can provide the AI tool with the context (or if you are crafting the AI prompt yourself), that does not necessarily mean that the AI will do as you expect, apply the context in practice, and approach its implications correctly. For example, as demonstrated in our study, when a history of the conversation was provided within the scope of a question group, there was still a considerable amount of repetition.

The most straightforward way to test the contextual responsiveness of a specific AI model is simply by conversing with it in a way that relies on context. Fortunately, most natural human conversation already depends on context heavily (saying everything would take too long otherwise), so that should not be too difficult. What is key is focusing on the varied types of context to identify what the AI model can and cannot do.

The seemingly overwhelming number of potential combinations of varied types of context could pose the greatest challenge for AI follow-up questions.

For example, human moderators may decide to go against the general rules by asking less open-ended questions to obtain information that is essential for the goals of their research while also understanding the tradeoffs.

In our study, we have observed that if the AI asked questions that were too generically open-ended as a follow-up to seed questions that were open-ended themselves, without a significant enough shift in perspective, this resulted in repetition, irrelevancy, and — therefore — frustration.

How well an AI model has been tuned to resolve conflicts between these various types of context appropriately could be seen as a reliable metric by which the quality of an AI follow-up question generator is measured.

Researcher control is also key since tougher decisions that are reliant on the researcher’s vision and understanding should remain firmly in the researcher’s hands. Because of this, a combination of static and AI-driven questions with complementary strengths and weaknesses could be the way to unlock richer insights.

A focus on contextual sensitivity validation can be seen as even more important while considering the broader social aspects. Among certain people, the trend-chasing and the general overhype of AI by the industry have led to a backlash against AI. AI skeptics have a number of valid concerns, including usefulness, ethics, data privacy, and the environment. Some usability testing participants may be unaccepting or even outwardly hostile toward encounters with AI.

Therefore, for the successful incorporation of AI into research, it will be essential to demonstrate it to the users as something that is both reasonable and helpful. Principles of ethical research remain as relevant as ever. Data needs to be collected and processed with the participant’s consent and not breach the participant’s privacy (e.g. so that sensitive data is not used for training AI models without permission).

Conclusion: What’s Next For AI In UX?

So, is AI a game-changer that could break down the barrier between moderated and unmoderated usability research? Maybe one day. The potential is certainly there. When AI follow-up questions work as intended, the results are exciting. Participants can become more talkative and clarify potentially essential details.

To any UX researcher who’s familiar with the feeling of analyzing vaguely phrased feedback and wishing that they could have been there to ask one more question to drive the point home, an automated solution that could do this for them may seem like a dream. However, we should also exercise caution since the blind addition of AI without testing and oversight can introduce a slew of biases. This is because the relevance of follow-up questions is dependent on all sorts of contexts.

Humans need to keep holding the reins in order to ensure that the research is based on solid conclusions and deliberate intent. The opportunity lies in the synergy between usability researchers and designers and AI tools that can significantly augment their ability to conduct unmoderated usability testing.

Humans + AI = Better Insights

The best approach to advocate for is likely a balanced one. As UX researchers and designers, humans should continue to learn how to use AI as a partner in uncovering insights. This article can serve as a jumping-off point, providing a list of the AI-driven technique’s potential weak points to be aware of, to monitor, and to improve on.