Human-Centered Design Through AI-Assisted Usability Testing: Reality Or Fiction?

Original Source: https://smashingmagazine.com/2025/02/human-centered-design-ai-assisted-usability-testing/

Unmoderated usability testing has been steadily growing more popular with the assistance of online UX research tools. Allowing participants to complete usability testing without a moderator, at their own pace and convenience, can have a number of advantages.

The first is the liberation from a strict schedule and the availability of moderators, meaning that many more participants can be recruited more quickly and cost-effectively. It also lets your team see how users interact with your solution in their natural environment, on their own devices. Overcoming the challenges of distance and time zone differences to obtain data from around the globe also becomes much easier.

However, forgoing the use of moderators also has its drawbacks. The moderator brings flexibility, as well as a human touch, to usability testing. Since they are in the same (virtual) space as the participants, the moderator usually has a good idea of what’s going on. They can react in real time to what they witness the participant do and say. A moderator can gently remind the participants to vocalize their thoughts. To the participant, thinking aloud in front of a moderator can also feel more natural than just talking to themselves. When the participant does something interesting, the moderator can prompt them for further comment.

Meanwhile, a traditional unmoderated study lacks such flexibility. In order to complete tasks, participants receive a fixed set of instructions. Once they are done, they can be asked to complete a static questionnaire, and that’s it.

The feedback that the research and design team receives is completely dependent on what information the participants provide on their own. Because of this, the phrasing of instructions and questions in unmoderated testing is crucial. However, even if everything is planned out perfectly, the lack of adaptive questioning means that a lot of information will remain unsaid, especially with regular people who are not trained in providing user feedback.

If the usability test participant misunderstands a question or doesn’t answer completely, the moderator can always ask for a follow-up to get more information. A question then arises: Could something like that be handled by AI to upgrade unmoderated testing?

Generative AI could present a new, potentially powerful tool for addressing this dilemma once we consider its current capabilities. Large language models (LLMs), in particular, can lead conversations that appear almost humanlike. If LLMs could be incorporated into usability testing to interactively enhance data collection by conversing with the participant, they might significantly augment researchers’ ability to obtain detailed personal feedback from large numbers of people. With human participants as the source of the actual feedback, this is an excellent example of human-centered AI, as it keeps humans in the loop.

There are quite a number of gaps in the research on AI in UX. To help close them, we at UXtweak research conducted a case study investigating whether AI can generate follow-up questions that are meaningful and elicit valuable answers from participants.

Asking participants follow-up questions to extract more in-depth information is just one part of the moderator’s responsibilities. However, it is a reasonably scoped subproblem for our evaluation, since it encapsulates the moderator’s ability to react to the context of the conversation in real time and to encourage participants to share salient information.

Experiment Spotlight: Testing GPT-4 In Real-Time Feedback

The focus of our study was on the underlying principles rather than any specific commercial AI solution for unmoderated usability testing. After all, AI models and prompts are constantly being tuned, so findings that are too narrow may become irrelevant a week or two after a new version is released. AI models are also a black box based on artificial neural networks, so the method by which they generate their specific output is not transparent.

Our results can show what you should be wary of to verify that an AI solution that you use can actually deliver value rather than harm. For our study, we used GPT-4, which at the time of the experiment was the most up-to-date model by OpenAI, also capable of fulfilling complex prompts (and, in our experience, dealing with some prompts better than the more recent GPT-4o).

In our experiment, we conducted a usability test with a prototype of an e-commerce website. The tasks involved the common user flow of purchasing a product.

Note: See our article published in the International Journal of Human-Computer Interaction for more detailed information about the prototype, tasks, questions, and so on.

In this setting, we compared the results across three conditions:

A regular static questionnaire made up of three pre-defined questions (Q1, Q2, Q3), serving as an AI-free baseline. Q1 was open-ended, asking the participants to narrate their experiences during the task. Q2 and Q3 can be considered non-adaptive follow-ups to Q1 since they asked participants more directly about usability issues and to identify things that they did not like.
The question Q1 used as a seed for up to three GPT-4-generated follow-up questions, serving as an alternative to Q2 and Q3.
All three pre-defined questions, Q1, Q2, and Q3, each used as a seed for its own GPT-4 follow-up.

To generate the follow-up questions, GPT-4 was given a prompt containing the seed question and the participant’s preceding answers as context.
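The full prompt text is not reproduced in this excerpt. As a rough illustration of how such a follow-up generator could be wired up, here is a sketch that assumes the OpenAI Python client; the system prompt and the helper function are hypothetical, not the study’s actual prompt or tooling:

```python
# Illustrative sketch only -- not the study's actual prompt or tooling.
# The system prompt below is a hypothetical stand-in.

SYSTEM_PROMPT = (
    "You are moderating an unmoderated usability test. "
    "Given the seed question and the participant's answers so far, "
    "ask ONE short follow-up question. Do not repeat topics the "
    "participant has already covered, do not lead the participant, "
    "and do not ask for design suggestions."
)

def build_messages(seed_question, exchanges):
    """Assemble the chat history for one question group.

    exchanges: list of (question, answer) pairs, starting with the
    seed question and including any earlier follow-ups.
    """
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages.append({"role": "assistant", "content": seed_question})
    for question, answer in exchanges:
        if question != seed_question:
            messages.append({"role": "assistant", "content": question})
        messages.append({"role": "user", "content": answer})
    return messages

# With the OpenAI client, the actual call would then look something like:
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4", messages=build_messages(seed, exchanges))
```

Feeding the whole question group back in on every turn is what gives the model a chance to avoid repetition; as the study shows, providing that history is necessary but not sufficient.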

To assess the impact of the AI follow-up questions, we then compared the results on both a quantitative and a qualitative basis. One of the measures that we analyzed is informativeness — ratings of the responses based on how useful they are at elucidating new usability issues encountered by the user.

As seen in the figure below, informativeness dropped significantly between the seed questions and their AI follow-ups. The follow-ups rarely helped identify a new issue, although they did help elaborate further details.

The emotional reactions of the participants offer another perspective on AI-generated follow-up questions. Our analysis of the prevailing emotional valence in the phrasing of answers revealed that answers started with a neutral sentiment, which then shifted toward the negative.

In the case of the pre-defined questions Q2 and Q3, this could be seen as natural. While the seed question Q1 was open-ended, asking the participants to explain what they did during the task, Q2 and Q3 focused more on the negative — usability issues and other disliked aspects. Curiously, the follow-up chains generally received an even more negative reception than their seed questions, and not for the same reason.

Frustration was common as participants interacted with the GPT-4-driven follow-up questions. This is rather critical, considering that frustration with the testing process can sidetrack participants from taking usability testing seriously, hinder meaningful feedback, and introduce a negative bias.

A major aspect that participants were frustrated with was redundancy. Repetitiveness, such as re-explaining the same usability issue, was quite common. While pre-defined follow-up questions yielded 27–28% repeated answers (likely because participants had already mentioned aspects they disliked during the open-ended Q1), AI-generated questions yielded 21%.

That’s not that much of an improvement, given that the comparison is made to questions that literally could not adapt to prevent repetition at all. Furthermore, when AI follow-up questions were added to obtain more elaborate answers for every pre-defined question, the repetition ratio rose further to 35%. In the variant with AI, participants also rated the questions as significantly less reasonable.

Answers to AI-generated questions contained a lot of statements like “I already said that” and “The obvious AI questions ignored my previous responses.”

The prevalence of repetition within the same group of questions (the seed question, its follow-up questions, and all of their answers) is particularly problematic, since the GPT-4 prompt had been provided with all the information available in this context. This demonstrates that a number of the follow-up questions were not sufficiently distinct and lacked the direction that would warrant asking them.

Insights From The Study: Successes And Pitfalls

To summarize the usefulness of AI-generated follow-up questions in usability testing, there are both good and bad points.

Successes:

Generative AI (GPT-4) excels at refining participant answers with contextual follow-ups.
Depth of qualitative insights can be enhanced.

Challenges:

Limited capacity to uncover new issues beyond pre-defined questions.
Participants can easily grow frustrated with repetitive or generic follow-ups.

While extracting somewhat more elaborate answers is a benefit, it can easily be overshadowed if poor question quality and relevance become too distracting. This can inhibit participants’ natural behavior and the relevance of their feedback if they’re focusing on the AI.

Therefore, in the following section, we discuss what to be careful of, whether you are picking an existing AI tool to assist you with unmoderated usability testing or implementing your own AI prompts or even models for a similar purpose.

Recommendations For Practitioners

Context is the be-all and end-all when it comes to the usefulness of follow-up questions. Most of the issues that we identified with the AI follow-up questions in our study can be tied to the ignorance of proper context in one shape or another.

Based on real blunders that GPT-4 made while generating questions in our study, we have meticulously collected and organized a list of the types of context that these questions were missing. Whether you’re looking to use an existing AI tool or are implementing your own system to interact with participants in unmoderated studies, you are strongly encouraged to use this list as a high-level checklist. With it as the guideline, you can assess whether the AI models and prompts at your disposal can ask reasonable, context-sensitive follow-up questions before you entrust them with interacting with real participants.

Without further ado, these are the relevant types of context:

General Usability Testing Context.
The AI should incorporate standard principles of usability testing in its questions. This may appear obvious, and it actually is. But it needs to be said, given that we have encountered issues related to this context in our study. For example, the questions should not be leading, ask participants for design suggestions, or ask them to predict their future behavior in completely hypothetical scenarios (behavioral research is much more accurate for that).
Usability Testing Goal Context.
Different usability tests have different goals depending on the stage of the design, business goals, or features being tested. Each follow-up question and the participant’s time used in answering it are valuable resources. They should not be wasted on going off-topic. For example, in our study, we were evaluating a prototype of a website with placeholder photos of a product. When the AI starts asking participants about their opinion of the displayed fake products, such information is useless to us.
User Task Context.
Whether the tasks in your usability testing are goal-driven or open and exploratory, their nature should be properly reflected in follow-up questions. When the participants have freedom, follow-up questions could be useful for understanding their motivations. By contrast, if your AI tool foolishly asks the participants why they did something closely related to the task (e.g., placing the specific item they were supposed to buy into the cart), you will seem just as foolish by association for using it.
Design Context.
Detailed information about the tested design (e.g., prototype, mockup, website, app) can be indispensable for making sure that follow-up questions are reasonable. Follow-up questions should require input from the participant. They should not be answerable just by looking at the design. Interesting aspects of the design could also be reflected in the topics to focus on. For example, in our study, the AI would occasionally ask participants why they believed a piece of information that was very prominently displayed in the user interface, making the question irrelevant in context.
Interaction Context.
If Design Context tells you what the participant could potentially see and do during the usability test, Interaction Context comprises all their actual actions, including their consequences. This could incorporate the video recording of the usability test, as well as the audio recording of the participant thinking aloud. The inclusion of interaction context would allow follow-up questions to build on the information that the participant already provided and to further clarify their decisions. For example, if a participant does not successfully complete a task, follow-up questions could be directed at investigating the cause, even as the participant continues to believe they have fulfilled their goal.
Previous Question Context.
Even when the questions you ask them are mutually distinct, participants can find logical associations between various aspects of their experience, especially since they don’t know what you will ask them next. A skilled moderator may decide to skip a question that a participant has already answered as part of another question, instead focusing on further clarifying the details. AI follow-up questions should be capable of doing the same to prevent the testing from becoming a repetitive slog.
Question Intent Context.
Participants routinely answer questions in a way that misses their original intent, especially if the question is more open-ended. A follow-up can spin the question from another angle to retrieve the intended information. However, if the participant’s answer is technically a valid reply, but only to the letter rather than the spirit of the question, the AI can miss this fact. Clarifying the intent could help address this.

When assessing a third-party AI tool, a question to ask is whether the tool allows you to provide all of the contextual information explicitly.

If AI does not have an implicit or explicit source of context, the best it can do is make biased, opaque guesses that can result in irrelevant, repetitive, and frustrating questions.

Even if you can provide the AI tool with the context (or if you are crafting the AI prompt yourself), that does not necessarily mean that the AI will do as you expect, apply the context in practice, and approach its implications correctly. For example, as demonstrated in our study, when a history of the conversation was provided within the scope of a question group, there was still a considerable amount of repetition.

The most straightforward way to test the contextual responsiveness of a specific AI model is simply by conversing with it in a way that relies on context. Fortunately, most natural human conversation already depends on context heavily (saying everything would take too long otherwise), so that should not be too difficult. What is key is focusing on the varied types of context to identify what the AI model can and cannot do.
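As a toy example of probing one type of context sensitivity, the redundancy problem described earlier can be checked mechanically: given the participant’s previous answers, does a candidate follow-up merely re-tread covered ground? The keyword-overlap heuristic below is our own illustrative sketch, not a method used in the study:

```python
# Illustrative heuristic, not the study's methodology: flag a candidate
# follow-up question as likely redundant when most of its content words
# already appear in the participant's earlier answers.

STOPWORDS = {"the", "a", "an", "you", "your", "why", "what", "how",
             "did", "do", "was", "were", "about", "of", "to", "in"}

def content_words(text):
    # Lowercase, strip simple punctuation, and drop function words.
    words = {w.strip(".,?!\"'").lower() for w in text.split()}
    return {w for w in words if w and w not in STOPWORDS}

def looks_redundant(candidate, previous_answers, threshold=0.6):
    """Return True if most content words of the candidate follow-up
    already occur in the participant's previous answers."""
    covered = set()
    for answer in previous_answers:
        covered |= content_words(answer)
    cand = content_words(candidate)
    if not cand:
        return True
    overlap = len(cand & covered) / len(cand)
    return overlap >= threshold
```

A probe suite built on checks like this one can be run against any candidate model before real participants are involved; a production version would want a more robust measure than raw keyword overlap, such as semantic similarity of embeddings.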

The seemingly overwhelming number of potential combinations of varied types of context could pose the greatest challenge for AI follow-up questions.

For example, human moderators may decide to go against the general rules by asking less open-ended questions to obtain information that is essential for the goals of their research while also understanding the tradeoffs.

In our study, we observed that when the AI asked questions that were too generically open-ended as follow-ups to seed questions that were themselves open-ended, without a significant enough shift in perspective, the result was repetition, irrelevance, and — therefore — frustration.

The ability to resolve various types of contextual conflict appropriately could serve as a reliable metric by which the quality of an AI generator of follow-up questions is measured, and as a goal for fine-tuning AI models.

Researcher control is also key since tougher decisions that are reliant on the researcher’s vision and understanding should remain firmly in the researcher’s hands. Because of this, a combination of static and AI-driven questions with complementary strengths and weaknesses could be the way to unlock richer insights.

A focus on contextual sensitivity validation can be seen as even more important while considering the broader social aspects. Among certain people, the trend-chasing and the general overhype of AI by the industry have led to a backlash against AI. AI skeptics have a number of valid concerns, including usefulness, ethics, data privacy, and the environment. Some usability testing participants may be unaccepting or even outwardly hostile toward encounters with AI.

Therefore, for the successful incorporation of AI into research, it will be essential to present it to users as something that is both reasonable and helpful. Principles of ethical research remain as relevant as ever. Data needs to be collected and processed with the participant’s consent and without breaching the participant’s privacy (e.g., ensuring that sensitive data is not used to train AI models without permission).

Conclusion: What’s Next For AI In UX?

So, is AI a game-changer that could break down the barrier between moderated and unmoderated usability research? Maybe one day. The potential is certainly there. When AI follow-up questions work as intended, the results are exciting. Participants can become more talkative and clarify potentially essential details.

To any UX researcher who’s familiar with the feeling of analyzing vaguely phrased feedback and wishing that they could have been there to ask one more question to drive the point home, an automated solution that could do this for them may seem like a dream. However, we should also exercise caution since the blind addition of AI without testing and oversight can introduce a slew of biases. This is because the relevance of follow-up questions is dependent on all sorts of contexts.

Humans need to keep holding the reins in order to ensure that the research is based on solid conclusions and intents. The opportunity lies in the synergy between AI and the usability researchers and designers whose ability to conduct unmoderated usability testing it could significantly augment.

Humans + AI = Better Insights

The best approach to advocate for is likely a balanced one. As UX researchers and designers, humans should continue to learn how to use AI as a partner in uncovering insights. This article can serve as a jumping-off point, providing a list of the AI-driven technique’s potential weak points to be aware of, to monitor, and to improve on.

Telgea's New Branding: Connecting Continents with a Unified Visual Identity

Original Source: https://abduzeedo.com/telgeas-new-branding-connecting-continents-unified-visual-identity


abduzeedo
02/17 — 2025

Explore Telgea’s new branding and visual identity, designed by Signifly, which uses a minimalist sphere to symbolize a unified telecommunications ecosystem.  

Telecommunications company Telgea recently unveiled a new branding and visual identity, created by design studio Signifly. The new identity is built around the concept of connection, drawing inspiration from the ancient supercontinent Pangea. This is reflected in the name Telgea, a fusion of “Pangea” and “telecom,” and the minimalist sphere logo, known as “The Dot.”  

The Dot serves as a visual metaphor for Telgea’s mission to simplify telecommunications for businesses worldwide. It represents the convergence of continents into a single, unified telecommunications ecosystem. This minimalist yet powerful symbol is versatile and adaptable, making it ideal for a modern digital brand.  

Data Visualization: Making Complex Data Intuitive

Beyond its role as the logo, The Dot is also a key element in Telgea’s data visualization system. It is integrated across various UI components and marketing materials, transforming complex telecom data into an intuitive and engaging format. This approach allows Telgea to present data concisely without sacrificing clarity.  

Balancing Corporate and Playful

Telgea’s new identity strikes a balance between corporate and playful. The primary system, with its clean lines, minimalist logo, and straightforward typeface, conveys a professional and credible image. However, moments of creativity shine through in the use of vibrant colors and a secondary set of playful icons. These elements add personality to 404 pages, onboarding flows, and team merchandise, helping Telgea stand out from traditional telcos while maintaining a business-first image.  

A Global Brand for a Connected World

Telgea’s new branding and visual identity effectively communicate the company’s mission to connect businesses across the globe. The Dot, as a symbol of convergence, embodies the idea of a unified telecommunications ecosystem. By integrating this symbol into its data visualization system, Telgea makes complex information accessible and engaging. This thoughtful and well-executed branding strategy positions Telgea as a leader in the telecommunications industry.  

See the full Telgea branding and visual identity by Signifly here: signifly.com/work/telgea

Branding and visual identity artifacts

Meeting European Accessibility Act (EAA) Standards: A Developer’s Checklist

Original Source: https://www.sitepoint.com/meeting-european-accessibility-act-eaa-standards/?utm_source=rss

Ensure your digital products meet the EAA standards before the June 2025 deadline. This guide provides a practical checklist for developers to audit, fix, and maintain accessibility compliance while improving user experience.

Continue reading Meeting European Accessibility Act (EAA) Standards: A Developer’s Checklist on SitePoint.

Random Forest Algorithm in Machine Learning

Original Source: https://www.sitepoint.com/random-forest-algorithm-in-machine-learning/?utm_source=rss

Learn how the Random Forest algorithm works in machine learning. Discover its key features, advantages, Python implementation, and real-world applications.

Continue reading Random Forest Algorithm in Machine Learning on SitePoint.

Hybrid e-reader game console design puts a delightful twist on a retro format

Original Source: https://www.creativebloq.com/entertainment/gaming/unusual-hybrid-e-reader-game-console-design-puts-a-delightful-twist-on-a-retro-format

Choose your own adventure – and make your own game.

How I Created A Popular WordPress Theme And Coined The Term “Hero Section” (Without Realizing It)

Original Source: https://smashingmagazine.com/2025/02/popular-wordpress-theme-term-hero-section/

I don’t know how it is for other designers, but when I start a new project, there’s always this moment where I just sit there and stare. Nothing. No idea. Empty.

People often think that “creativity” is some kind of magic that suddenly comes out of nowhere, like a lightning strike from the sky. But I can tell you that’s not how it works — at least not for me. I’ve learned how to “hack” my creativity. It’s no longer random but more like a process. And one part of that process led me to create what we now call the “Hero Section.”

The Birth Of The Hero Section

If I’m being honest, I don’t even know exactly how I came up with the name “Hero.” It felt more like an epiphany than a conscious decision. At the time, I was working on the Brooklyn theme, and Bootstrap was gaining popularity. I wasn’t a huge fan of Bootstrap, not because it’s bad, but because I found it more complicated to work with than writing my own CSS. Ninety-five percent of the CSS and HTML in Brooklyn is custom-written, devoid of any framework.

But there was one part of Bootstrap that stuck with me: the Jumbotron class. The name felt a bit odd, but I understood its purpose — to create something big and attention-grabbing. That stuck in my mind, and like lightning, the word “Hero” came to me.

Why Hero? A hero is a figure that demands attention. It’s bold, strong, and memorable, which is everything I wanted Brooklyn’s intro section to be. At first, I envisioned a “Hero Button,” but I realized the concept could be much broader: it could encompass the entire intro section, setting the tone for the website and drawing the visitor’s focus to the most important message.

The term “Banner” was another option, but it felt generic and uninspired. A Hero, on the other hand, is a force to be reckoned with. So, I committed to the idea.

From Banner To Hero Section

Back in 2013, most websites called their intro sections a “Banner” or “Header.” At best, you’d see a single image with a title, maybe a subtitle, and a button. Sliders were also popular, cycling through multiple banners with different content. But I wanted Brooklyn’s intro to be more than just a banner — it had to make a lasting impression.

So, I redefined it:

HTML Structure
I named the section <section class="hero">. This wasn’t just a banner or a slider; it was a Hero Section.
CSS Customization
Everything within the section followed the Hero concept: .hero-slogan, .hero-title, .hero-description, .hero-btn. I coded it all from scratch, making sure it had a cohesive and distinct identity.
Marketing Language
I didn’t stop at the code. I used the word “Hero” everywhere, including Brooklyn’s documentation, the theme description, the landing page, and the featured images.
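Reconstructed from the class names mentioned above (an illustrative sketch, not Brooklyn’s actual source code), the markup might have looked something like this:

```html
<!-- Reconstruction based on the class names described in the article,
     not Brooklyn's actual source code. -->
<section class="hero">
  <p class="hero-slogan">Welcome to Brooklyn</p>
  <h1 class="hero-title">A Theme That Demands Attention</h1>
  <p class="hero-description">Set the tone for your site with a bold, full-width intro.</p>
  <a class="hero-btn" href="#">Get Started</a>
</section>
```

The point is that every element inside the intro carries the hero- prefix, which is what gives the section its cohesive, distinct identity.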

At the time, Brooklyn was attracting tens of thousands of visitors per day on ThemeForest, which is the storefront I use to make the theme available for sale. It quickly became a top seller, selling like hotcakes. Naturally, people started asking, “What’s a Hero Section?” It was a new term, and I loved explaining the concept.

The Hero Section had become sort of like a hook that made Brooklyn more alluring, and we sold a lot of copies of the theme because of it.

What I Didn’t Know About The Hero’s Future

At the time, I intentionally used the term “Hero” in Brooklyn’s code and marketing because I wanted it to stand out. I made sure it was everywhere: in the <section> tags, in class names like .hero-title and .hero-description, and on Brooklyn’s landing page and product description.

But honestly, I didn’t realize just how big the term would become. I wasn’t thinking about carving it into stone or reserving it as something unique to Brooklyn. That kind of forward-thinking wasn’t on my radar back then. All I wanted was to grab attention and make Brooklyn stand out.

Over time, we kept adding new variations to the Hero Section. For example, we introduced the Hero Video, allowing users to add video backgrounds to their Heroes — something that felt bold and innovative at the time. We also added the Hero Slider, a simple image slider within the Hero Section, giving users more flexibility to create dynamic intros.

Brooklyn even had a small Hero Builder integrated directly into the theme — something I believe is still unique to this day.

Looking back, it’s clear I missed an opportunity to cement the Hero Section as a signature feature of Brooklyn. Once I saw other authors adopting the term, I stopped emphasizing Brooklyn’s role in popularizing it. I thought the concept spoke for itself.

How The Hero Went Mainstream

One of the most fascinating things about the Hero Section is how quickly the term caught on. Brooklyn’s popularity gave the Hero Section massive exposure. Designers and developers started noticing it, and soon, other theme authors began adopting the term in their products.

Brooklyn wasn’t just another theme. It was one of the top sellers on ThemeForest, the world’s largest marketplace for digital goods, with millions of users. And I didn’t just use the term “Hero” once or twice — I used it everywhere: descriptions, featured images, and documentation. I made sure people saw it. Before long, I noticed that more and more themes used the term to describe large intro sections in their work.

Today, the Hero Section is everywhere. It’s a standard in web design recognized by designers and developers worldwide. While I can’t say I invented the concept, I’m proud to have played a key role in bringing it into the mainstream.

Lessons From Building A Hero

Creating the Hero Section taught me a lot about design, creativity, and marketing. Here are the key takeaways:

Start Simple: The Hero Section started as a simple idea — a way to focus attention. You don’t need a complex plan to create something impactful.
Commit to Your Ideas: Once I decided on the term Hero, I committed to it in the code, the design, and the marketing. Consistency made it stick.
Bold Names Matter: Naming the section “Hero” instead of “Banner” gave it a personality and purpose. Names can define how users perceive a design.
Constantly Evolve: Adding features like the Hero Video and Hero Slider kept the concept fresh and adaptable to user needs.
Don’t Ignore Your Role: If you introduce something new, own it. I should have continued promoting Brooklyn as a Hero pioneer to solidify its legacy.

Inspiration Isn’t Magic; It’s Hard Work

Inspiration often comes from unexpected places. For me, it came from questioning a Bootstrap class name and reimagining it into something new. The Hero Section wasn’t just a product of creative brilliance — it was the result of persistence, experimentation, and a bit of luck.

What’s the one element you’ve created that you’re most proud of? I’d love to hear your stories in the comments below!

The latest iPad mini is great for working on the go – and now there's $100 off

Original Source: https://www.creativebloq.com/tech/phones-tablets/the-latest-ipad-mini-is-great-for-working-on-the-go-and-now-theres-usd100-off

This deal matches the cheapest price yet.

Where Are My Favorites on Printful? (How to Find Saved Products Fast)

Original Source: https://ecommerce-platforms.com/articles/where-are-my-favorites-on-printful-how-to-find-saved-products-fast

Your Favorites on Printful should be under your profile icon in the top-right corner after logging in. If you don’t see them, check Product Templates instead—Printful often moves features around. If it’s still missing, clear your browser cache or try a different device.

I’ve been using Printful for over a decade, and I know their UI changes can be frustrating. If you can’t find Favorites, here’s exactly where to look and what to do next.

How to Find Your Favorites on Printful

Log in to your Printful account.

Click on your profile icon (top-right corner).

Look for “Favorites” or “Saved Products” in the dropdown menu.

If it’s missing, go to Product Templates—Printful sometimes shifts saved items there.

Why You Can’t See Your Favorites

If your Favorites are gone, one of these is likely the reason:

Wrong Account – Double-check you’re logged into the correct Printful account.

Printful UI Changes – They move things around often; check Product Templates instead.

Cache or Browser Issues – Clear cookies, switch browsers, or disable extensions.

Alternative Ways to Save Your Favorite Products

Since Printful’s Favorites isn’t always reliable, try:

Product Templates – The best way to save designs for quick access.

Store Collections – If you use Shopify or Etsy, organize them there.

Browser Bookmarks – Quickest method for finding frequently used products.

Still Can’t Find Favorites? Contact Printful Support

If nothing works, reach out to Printful Support via:

Live Chat (on their website)

Email: support@printful.com

Help Center: Printful FAQ

Final Thoughts

Printful’s Favorites might not always be where you expect, but checking under Product Templates usually solves the problem. If that doesn’t work, clear your browser cache or contact support.

I’ve been through this plenty of times—hope this saves you some frustration!


Blasphemy: A Radical Take on Editorial Design and Culinary Rebellion

Original Source: https://abduzeedo.com/blasphemy-radical-take-editorial-design-and-culinary-rebellion


abduzeedo
02/07 — 2025

Blasphemy redefines editorial design with chaotic layouts, biblical subversions, and bold typography that mirror its rebellious culinary ethos.

Cookbooks tend to follow a formula—clean layouts, mouthwatering imagery, and neatly structured recipes. Blasphemy does the opposite. Created by Olly Wood, Creative Director at McCann London, this 104-page manifesto shatters conventions in both food and design. It is an editorial design experiment that rejects traditional aesthetic norms, mirroring the book’s provocative culinary themes.

A Sacred Subversion of Design

The visual language of Blasphemy takes direct inspiration from the Bible—historically regarded as the ultimate book of rules—but upends its authority with irreverent twists. Layouts ignore classic typesetting conventions, verse annotations are misused, and overlays create intentional chaos. Misalignments, pixelated images, and jarring typographic choices work together to establish a visual rebellion that perfectly complements the book’s culinary defiance.

Typography plays a crucial role in setting the tone. Harsh kerning, mirrored letters, and heavy distortions introduce a punk aesthetic, making each spread unpredictable. The chaotic design isn’t just aesthetic; it’s part of the storytelling. Each visual inconsistency echoes the book’s core theme—challenging the status quo, whether in food or design.

The Art of Imperfection

Rather than the polished precision typical of high-end cookbooks, Blasphemy embraces imperfection as a design principle. Typos, grammatical quirks, and broken alignments aren’t mistakes—they are deliberate choices meant to reinforce the book’s anti-establishment stance. The writing style reflects this philosophy, favoring raw, unfiltered narration over conventional editorial polish.

Photography, too, follows an unconventional approach. Pixelation, heavy Photoshop edits, and distorted compositions replace the usual hyper-realistic food imagery. The result is a visual experience that feels more like an art piece than a cookbook.

A Statement in Print

Beyond its disruptive design, Blasphemy is a statement about what editorial design can be. The hardcover is bound in woven fabric, reminiscent of old religious texts. The print production—handled by Yintuan Net Printing Co. in Fuzhou, China—emphasizes craftsmanship while retaining the book’s intentionally unpolished aesthetic.

This fusion of radical editorial design and culinary rebellion makes Blasphemy more than a cookbook. It’s a manifesto for those willing to challenge norms, whether in the kitchen or in print.

Blasphemy can be purchased for £20 at www.blasphemybook.com


Integrations: From Simple Data Transfer To Modern Composable Architectures

Original Source: https://smashingmagazine.com/2025/02/integrations-from-simple-data-transfer-to-composable-architectures/

This article is sponsored by Storyblok

When computers first started talking to each other, the methods were remarkably simple. In the early days of the Internet, systems exchanged files via FTP or communicated via raw TCP/IP sockets. This direct approach worked well for simple use cases but quickly showed its limitations as applications grew more complex.

# Basic socket server example
import socket

server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_socket.bind(('localhost', 12345))
server_socket.listen(1)

while True:
    connection, address = server_socket.accept()
    data = connection.recv(1024)
    # Process data and build a response
    connection.send(response)

The real breakthrough in enabling complex communication between computers on a network came with the introduction of Remote Procedure Calls (RPC) in the 1980s. RPC allowed developers to call procedures on remote systems as if they were local functions, abstracting away the complexity of network communication. This pattern laid the foundation for many of the modern integration approaches we use today.

At its core, RPC implements a client-server model where the client prepares and serializes a procedure call with parameters, sends the message to a remote server, the server deserializes and executes the procedure, and then sends the response back to the client.

Here’s a simplified example using Python’s XML-RPC.

# Server
from xmlrpc.server import SimpleXMLRPCServer

def calculate_total(items):
    return sum(items)

server = SimpleXMLRPCServer(("localhost", 8000))
server.register_function(calculate_total)
server.serve_forever()

# Client
import xmlrpc.client

proxy = xmlrpc.client.ServerProxy("http://localhost:8000/")
try:
    result = proxy.calculate_total([1, 2, 3, 4, 5])
except ConnectionError:
    print("Network error occurred")

RPC can operate in both synchronous (blocking) and asynchronous modes.
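The difference can be illustrated with the standard library alone: the same blocking XML-RPC call becomes asynchronous when dispatched to a thread pool, leaving the caller free to do other work in the meantime. This is a minimal sketch, not part of the original example; the `add` procedure and the in-process server are made up for illustration.

```python
# Sync vs. async RPC: one blocking XML-RPC call made directly,
# and the same call dispatched to a worker thread via a pool.
import threading
import xmlrpc.client
from concurrent.futures import ThreadPoolExecutor
from xmlrpc.server import SimpleXMLRPCServer

# Toy server on an OS-assigned port, running in a background thread
server = SimpleXMLRPCServer(("localhost", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

proxy = xmlrpc.client.ServerProxy(f"http://localhost:{port}/")

# Synchronous: the caller blocks until the server responds
sync_result = proxy.add(2, 3)

# Asynchronous: the call runs in a worker thread; collect it later
with ThreadPoolExecutor() as pool:
    future = pool.submit(proxy.add, 40, 2)
    # ... the caller can do other work here ...
    async_result = future.result()

print(sync_result, async_result)  # 5 42
server.shutdown()
```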

Modern implementations such as gRPC support streaming and bi-directional communication. In the example below, we define a gRPC service called Calculator with two RPC methods, Calculate, which takes a Numbers message and returns a Result message, and CalculateStream, which sends a stream of Result messages in response.

// protobuf
service Calculator {
  rpc Calculate(Numbers) returns (Result);
  rpc CalculateStream(Numbers) returns (stream Result);
}

Modern Integrations: The Rise Of Web Services And SOA

The late 1990s and early 2000s saw the emergence of Web Services and Service-Oriented Architecture (SOA). SOAP (Simple Object Access Protocol) became the standard for enterprise integration, introducing a more structured approach to system communication.

<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Header>
  </soap:Header>
  <soap:Body>
    <m:GetStockPrice xmlns:m="http://www.example.org/stock">
      <m:StockName>IBM</m:StockName>
    </m:GetStockPrice>
  </soap:Body>
</soap:Envelope>

While SOAP provided robust enterprise features, its complexity and verbosity led to the development of simpler alternatives, especially the REST APIs that dominate Web services communication today.

But REST is not alone. Let’s have a look at some modern integration patterns.

RESTful APIs

REST (Representational State Transfer) has become the de facto standard for Web APIs, providing a simple, stateless approach to manipulating resources. Its simplicity and HTTP-based nature make it ideal for web applications.

First defined by Roy Fielding in 2000 as an architectural style built on the Web's standard protocols, REST imposes constraints that align with the goals of the modern Web, such as performance, scalability, reliability, and visibility: client and server are separated by a uniform interface and loosely coupled, communication is stateless, and responses are cacheable.

In modern applications, the most common REST implementations use the JSON format to encode request and response messages.

// Request
async function fetchUserData() {
  const response = await fetch('https://api.example.com/users/123');
  const userData = await response.json();
  return userData;
}

// Response
{
  "id": "123",
  "name": "John Doe",
  "_links": {
    "self": { "href": "/users/123" },
    "orders": { "href": "/users/123/orders" },
    "preferences": { "href": "/users/123/preferences" }
  }
}

GraphQL

GraphQL emerged from Facebook’s internal development needs in 2012 before being open-sourced in 2015. Born out of the challenges of building complex mobile applications, it addressed limitations in traditional REST APIs, particularly the issues of over-fetching and under-fetching data.

At its core, GraphQL is a query language and runtime that provides a type system and declarative data fetching, allowing the client to specify exactly what it wants to fetch from the server.

// graphql
type User {
  id: ID!
  name: String!
  email: String!
  posts: [Post!]!
}

type Post {
  id: ID!
  title: String!
  content: String!
  author: User!
  publishDate: String!
}

query GetUserWithPosts {
  user(id: "123") {
    name
    posts(last: 3) {
      title
      publishDate
    }
  }
}

Often used to build complex UIs with nested data structures, mobile applications, or microservices architectures, it has proven effective at handling complex data requirements at scale and offers a growing ecosystem of tools.

Webhooks

Modern applications often require real-time updates. For example, e-commerce apps need to update inventory levels when a purchase is made, and content management apps need to refresh cached content when a document is edited. Traditional request-response models struggle to meet these demands because they rely on clients polling servers for updates, which is inefficient and resource-intensive.

Webhooks and event-driven architectures address these needs more effectively. Webhooks let servers send real-time notifications to clients or other systems when specific events happen, reducing the need for continuous polling. Event-driven architectures go further by decoupling application components: services publish and subscribe to events asynchronously, which makes the system more scalable, more responsive, and simpler to maintain.
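The decoupling at the heart of an event-driven architecture can be sketched in a few lines: publishers and subscribers share only an event name, never a direct reference to each other. This is an illustrative in-process toy, not a real message broker; the `EventBus` class and the `content.published` event name are made up for the example.

```python
# Minimal publish/subscribe sketch: the cache service subscribes to
# an event without knowing who publishes it, and the publisher emits
# the event without knowing who (if anyone) is listening.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = EventBus()
refreshed = []

# Subscriber side: react to published content by refreshing a cache
bus.subscribe("content.published", lambda event: refreshed.append(event["slug"]))

# Publisher side: emit the event with its payload
bus.publish("content.published", {"slug": "home"})
print(refreshed)  # ['home']
```

In a distributed system the bus would be a broker or a webhook dispatcher, but the contract is the same: components agree on event names and payloads, not on each other's internals.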

import fastify from 'fastify';

const server = fastify();

server.post('/webhook', async (request, reply) => {
  const event = request.body;

  if (event.type === 'content.published') {
    await refreshCache();
  }

  return reply.code(200).send();
});

This is a simple Node.js function that uses Fastify to set up a web server. It responds to the endpoint /webhook, checks the type field of the JSON request, and refreshes a cache if the event is of type content.published.
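In production, a webhook endpoint like this usually also verifies that the request really comes from the expected sender. A common pattern is an HMAC signature computed over the raw request body with a shared secret. The sketch below shows the idea in Python; the secret and the signature format are placeholders, since each provider documents its own scheme (often including a timestamp to prevent replays).

```python
# Verify a webhook payload by recomputing its HMAC-SHA256 signature
# and comparing it to the signature sent alongside the request.
import hashlib
import hmac

SECRET = b"placeholder-webhook-secret"  # shared with the sender

def sign(body: bytes) -> str:
    return hmac.new(SECRET, body, hashlib.sha256).hexdigest()

def is_valid(body: bytes, signature_header: str) -> bool:
    # compare_digest avoids leaking information via timing differences
    return hmac.compare_digest(sign(body), signature_header)

body = b'{"type": "content.published"}'
good = is_valid(body, sign(body))   # signature matches the body
bad = is_valid(body, "0" * 64)      # forged or corrupted signature
print(good, bad)  # True False
```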

With all this background in place, it is easier to picture the current state of web application development: a single, monolithic app is no longer the answer to business needs. A new paradigm has emerged: Composable Architecture.

Composable Architecture And Headless CMSs

This evolution has led us to the concept of composable architecture, where applications are built by combining specialized services. This is where headless CMS solutions have a clear advantage, serving as the perfect example of how modern integration patterns come together.

Headless CMS platforms separate content management from content presentation, allowing you to build specialized frontends relying on a fully-featured content backend. This decoupling facilitates content reuse, independent scaling, and the flexibility to use a dedicated technology or service for each part of the system.

Take Storyblok as an example. Storyblok is a headless CMS designed to help developers build flexible, scalable, and composable applications. Content is exposed via REST or GraphQL APIs, a long list of events can trigger webhooks, editors get a Visual Editor where they can see changes in real time, and many integrations are available out of the box via a marketplace.

Imagine this ContentDeliveryService in your app, where you can interact with Storyblok’s REST API using the open source JS Client:

import StoryblokClient from "storyblok-js-client";

class ContentDeliveryService {
  constructor(private storyblok: StoryblokClient) {}

  async getPageContent(slug: string) {
    const { data } = await this.storyblok.get(`cdn/stories/${slug}`, {
      version: 'published',
      resolve_relations: 'featured-products.products'
    });

    return data.story;
  }

  async getRelatedContent(tags: string[]) {
    const { data } = await this.storyblok.get('cdn/stories', {
      version: 'published',
      with_tag: tags.join(',')
    });

    return data.stories;
  }
}

The last piece of the puzzle is a real example of integration.

Again, many are already available in the Storyblok marketplace, and you can easily control them from the dashboard. However, to fully leverage the Composable Architecture, we can use the most powerful tool in the developer’s hand: code.

Let’s imagine a modern e-commerce platform that uses Storyblok as its content hub, Shopify for inventory and orders, Algolia for product search, and Stripe for payments.

Once each account is set up and we have our access tokens, we could quickly build a front-end page for our store. This isn’t production-ready code, but just to get a quick idea, let’s use React to build the page for a single product that integrates our services.

First, we should initialize our clients:

import StoryblokClient from "storyblok-js-client";
import { algoliasearch } from "algoliasearch";
import Client from "shopify-buy";

const storyblok = new StoryblokClient({
  accessToken: "your_storyblok_token",
});

const algoliaClient = algoliasearch(
  "your_algolia_app_id",
  "your_algolia_api_key",
);

const shopifyClient = Client.buildClient({
  domain: "your-shopify-store.myshopify.com",
  storefrontAccessToken: "your_storefront_access_token",
});

Given that we created a blok in Storyblok that holds product information such as the product_id, we could write a component that takes the productSlug, fetches the product content from Storyblok, the inventory data from Shopify, and some related products from the Algolia index:

async function fetchProduct() {
  // get product content from Storyblok
  const { data } = await storyblok.get(`cdn/stories/${productSlug}`);

  // fetch inventory data from Shopify
  const shopifyInventory = await shopifyClient.product.fetch(
    data.story.content.product_id
  );

  // fetch related products from the Algolia "products" index
  const { hits } = await algoliaClient.searchSingleIndex({
    indexName: "products",
    searchParams: {
      filters: `category:${data.story.content.category}`,
    },
  });
}

We could then set a simple component state:

const [productData, setProductData] = useState(null);
const [inventory, setInventory] = useState(null);
const [relatedProducts, setRelatedProducts] = useState([]);

useEffect(() => {
  // ...
  // combine fetchProduct() with setState to update the state
  // ...

  fetchProduct();
}, [productSlug]);

And return a template with all our data:

<h1>{productData.content.title}</h1>
<p>{productData.content.description}</p>
<h2>Price: ${inventory.variants[0].price}</h2>
<h3>Related Products</h3>
<ul>
  {relatedProducts.map((product) => (
    <li key={product.objectID}>{product.name}</li>
  ))}
</ul>

We could then use an event-driven approach and create a server that listens to our shop events and processes the checkout with Stripe (credits to Manuel Spigolon for this tutorial):

const stripe = require('stripe')

module.exports = async function plugin (app, opts) {
  const stripeClient = stripe(app.config.STRIPE_PRIVATE_KEY)

  app.post('/create-checkout-session', async (request, reply) => {
    const session = await stripeClient.checkout.sessions.create({
      line_items: [ /* ... from request.body */ ],
      mode: 'payment',
      success_url: "https://your-site.com/success",
      cancel_url: "https://your-site.com/cancel",
    })

    return reply.redirect(303, session.url)
  })
  // ...
}

And with this approach, each service is independent of the others, which helps us achieve our business goals (performance, scalability, flexibility) with a good developer experience and a smaller and simpler application that’s easier to maintain.

Conclusion

The integration between headless CMSs and modern web services represents the current and future state of high-performance web applications. By using specialized, decoupled services, developers can focus on business logic and user experience. A composable ecosystem is not only modular but also resilient to the evolving needs of the modern enterprise.

These integrations highlight the importance of mastering API-driven architectures and understanding how different tools can harmoniously fit into a larger tech stack.

In today’s digital landscape, success lies in choosing tools that offer flexibility and efficiency, adapt to evolving demands, and create applications that are future-proof against the challenges of tomorrow.

If you want to dive deeper into the integrations you can build with Storyblok and other services, check out Storyblok’s integrations page. You can also take your projects further by creating your own plugins with Storyblok’s plugin development resources.