PowerA controller stock CLEARANCE – up to 46% off Mario, Fortnite and more
Original Source: https://www.creativebloq.com/entertainment/gaming/powera-controller-stock-clearance-up-to-46-percent-off-mario-fortnite-and-more
Fortnite, Mario and more.
Original Source: https://webdesignerdepot.com/the-10-foundational-ux-principles-every-designer-should-know/
If your app or website makes people feel confused, lost, or quietly scream into a pillow, your UX needs a reboot. These 10 timeless UX principles are the difference between digital love and digital rage-quitting. Designers, read this before you accidentally make another invisible button.
Original Source: https://smashingmagazine.com/2025/09/psychology-trust-ai-guide-measuring-designing-user-confidence/
Misuse of and misplaced trust in AI are becoming unfortunately common. For example, lawyers trying to leverage the power of generative AI for research have submitted court filings citing multiple compelling legal precedents. The problem? The AI had confidently, eloquently, and completely fabricated the cases cited. The resulting sanctions and public embarrassment can become a viral cautionary tale, shared across social media as a stark example of AI’s fallibility.
This goes beyond a technical glitch; it’s a catastrophic failure of trust in AI tools in an industry where accuracy is critical. The trust issue here is twofold: the law firms blindly over-trusted the AI tool to return accurate information, and the subsequent fallout can breed distrust strong enough that platforms featuring AI might not be considered for use until trust is reestablished.
Issues with trusting AI aren’t limited to the legal field. We are seeing the impact of fictional AI-generated information in critical fields such as healthcare and education. On a more personal scale, many of us have had the experience of asking Siri or Alexa to perform a task, only to have it done incorrectly or not at all, for no apparent reason. I’m guilty of sending more than one out-of-context hands-free text to an unsuspecting contact after Siri mistakenly pulled up a completely different name than the one I’d requested.
With digital products moving to incorporate generative and agentic AI at an increasingly frequent rate, trust has become the invisible user interface. When it works, our interactions are seamless and powerful. When it breaks, the entire experience collapses, with potentially devastating consequences. As UX professionals, we’re on the front lines of a new twist on a common challenge. How do we build products that users can rely on? And how do we even begin to measure something as ephemeral as trust in AI?
Trust isn’t a mystical quality. It is a psychological construct built on predictable factors. I won’t dive deep into academic literature on trust in this article. However, it is important to understand that trust is a concept that can be understood, measured, and designed for. This article will provide a practical guide for UX researchers and designers. We will briefly explore the psychological anatomy of trust, offer concrete methods for measuring it, and provide actionable strategies for designing more trustworthy and ethical AI systems.
The Anatomy of Trust: A Psychological Framework for AI
To build trust, we must first understand its components. Think of trust like a four-legged stool. If any one leg is weak, the whole thing becomes unstable. Based on classic psychological models, we can adapt these “legs” for the AI context.
1. Ability (or Competence)
This is the most straightforward pillar: Does the AI have the skills to perform its function accurately and effectively? If a weather app is consistently wrong, you stop trusting it. If an AI legal assistant creates fictitious cases, it has failed the basic test of ability. This is the functional, foundational layer of trust.
2. Benevolence
This moves from function to intent. Does the user believe the AI is acting in their best interest? A GPS that suggests a toll-free route even if it’s a few minutes longer might be perceived as benevolent. Conversely, an AI that aggressively pushes sponsored products feels self-serving, eroding this sense of benevolence. This is where user fears, such as concerns about job displacement, directly challenge trust—the user starts to believe the AI is not on their side.
3. Integrity
Does the AI operate on predictable and ethical principles? This is about transparency, fairness, and honesty. An AI that clearly states how it uses personal data demonstrates integrity. A system that quietly changes its terms of service or uses dark patterns to get users to agree to something violates integrity. So does an AI recruiting tool whose algorithm encodes subtle yet extremely harmful social biases.
4. Predictability & Reliability
Can the user form a stable and accurate mental model of how the AI will behave? Unpredictability, even if the outcomes are occasionally good, creates anxiety. A user needs to know, roughly, what to expect. An AI that gives a radically different answer to the same question asked twice is unpredictable and, therefore, hard to trust.
The Trust Spectrum: The Goal of a Well-Calibrated Relationship
Our goal as UX professionals shouldn’t be to maximize trust at all costs. An employee who blindly trusts every email they receive is a security risk. Likewise, a user who blindly trusts every AI output can be led into dangerous situations, such as the legal briefs referenced at the beginning of this article. The goal is well-calibrated trust.
Think of it as a spectrum where the upper-mid level is the ideal state for a truly trustworthy product to achieve:
Active Distrust
The user believes the AI is incompetent or malicious. They will avoid it or actively work against it.
Suspicion & Scrutiny
The user interacts cautiously, constantly verifying the AI’s outputs. This is a common and often healthy state for users of new AI.
Calibrated Trust (The Ideal State)
This is the sweet spot. The user has an accurate understanding of the AI’s capabilities—its strengths and, crucially, its weaknesses. They know when to rely on it and when to be skeptical.
Over-trust & Automation Bias
The user unquestioningly accepts the AI’s outputs. This is where users follow flawed AI navigation into a field or accept a fictional legal brief as fact.
Our job is to design experiences that guide users away from the dangerous poles of Active Distrust and Over-trust and toward that healthy, realistic middle ground of Calibrated Trust.
The Researcher’s Toolkit: How to Measure Trust In AI
Trust feels abstract, but it leaves measurable fingerprints. Academics in the social sciences have done much to define both what trust looks like and how it might be measured. As researchers, we can capture these signals through a mix of qualitative, quantitative, and behavioral methods.
Qualitative Probes: Listening For The Language Of Trust
During interviews and usability tests, go beyond “Was that easy to use?” and listen for the underlying psychology. Here are some questions you can start using tomorrow:
To measure Ability:
“Tell me about a time this tool’s performance surprised you, either positively or negatively.”
To measure Benevolence:
“Do you feel this system is on your side? What gives you that impression?”
To measure Integrity:
“If this AI made a mistake, how would you expect it to handle it? What would be a fair response?”
To measure Predictability:
“Before you clicked that button, what did you expect the AI to do? How closely did it match your expectation?”
Investigating Existential Fears (The Job Displacement Scenario)
One of the most potent challenges to an AI’s Benevolence is the fear of job displacement. When a participant expresses this, it is a critical research finding. It requires a specific, ethical probing technique.
Imagine a participant says, “Wow, it does that part of my job pretty well. I guess I should be worried.”
An untrained researcher might get defensive or dismiss the comment. An ethical, trained researcher validates and explores:
“Thank you for sharing that; it’s a vital perspective, and it’s exactly the kind of feedback we need to hear. Can you tell me more about what aspects of this tool make you feel that way? In an ideal world, how would a tool like this work with you to make your job better, not to replace it?”
This approach respects the participant, validates their concern, and reframes the feedback into an actionable insight about designing a collaborative, augmenting tool rather than a replacement. Similarly, your findings should reflect the concern users expressed about replacement. We shouldn’t pretend this fear doesn’t exist, nor should we pretend that every AI feature is being implemented with pure intention. Users know better than that, and we should be prepared to argue on their behalf for how the technology might best co-exist within their roles.
Quantitative Measures: Putting A Number On Confidence
You can quantify trust without needing a data science degree. After a user completes a task with an AI, supplement your standard usability questions with a few simple Likert-scale items:
“The AI’s suggestion was reliable.” (1-7, Strongly Disagree to Strongly Agree)
“I am confident in the AI’s output.” (1-7)
“I understood why the AI made that recommendation.” (1-7)
“The AI responded in a way that I expected.” (1-7)
“The AI provided consistent responses over time.” (1-7)
Over time, these metrics can track how trust is changing as your product evolves.
Note: If you want to go beyond these simple questions that I’ve made up, numerous scales (measurements) of trust in technology exist in the academic literature. It might be an interesting endeavor to measure relevant psychographic and demographic characteristics of your users and see how they correlate with trust in AI or in your product. Table 1 at the end of the article contains four examples of current scales you might consider using to measure trust. You can decide which is best for your application; or, if you aren’t looking to publish your findings in an academic journal but still want items that have been subjected to some empirical scrutiny, you might pull individual items from any of the scales.
Behavioral Metrics: Observing What Users Do, Not Just What They Say
People’s true feelings are often revealed in their actions. Choose behaviors that reflect your product’s specific context of use. Here are a few general metrics that apply to most AI tools and give insight into users’ behavior and the trust they place in your tool.
Correction Rate
How often do users manually edit, undo, or ignore the AI’s output? A high correction rate is a powerful signal of low trust in its Ability.
Verification Behavior
Do users switch to Google or open another application to double-check the AI’s work? This indicates they don’t trust it as a standalone source of truth. Early on, though, verification can be a healthy sign that users are calibrating their trust in the system.
Disengagement
Do users turn the AI feature off? Do they stop using it entirely after one bad experience? This is the ultimate behavioral vote of no confidence.
Designing For Trust: From Principles to Pixels
Once you’ve researched and measured trust, you can begin to design for it. This means translating psychological principles into tangible interface elements and user flows.
Designing for Competence and Predictability
Set Clear Expectations
Use onboarding, tooltips, and empty states to honestly communicate what the AI is good at and where it might struggle. A simple “I’m still learning about [topic X], so please double-check my answers” can work wonders.
Show Confidence Levels
Instead of just giving an answer, have the AI signal its own uncertainty. A weather app that says “70% chance of rain” is more trustworthy than one that just says “It will rain” and is wrong. An AI could say, “I’m 85% confident in this summary,” or highlight sentences it’s less sure about.
The Role of Explainability (XAI) and Transparency
Explainability isn’t about showing users the code. It’s about providing a useful, human-understandable rationale for a decision.
Instead of:
“Here is your recommendation.”
Try:
“Because you frequently read articles about UX research methods, I’m recommending this new piece on measuring trust in AI.”
This addition transforms the AI from an opaque oracle into a transparent, logical partner.
Many of the popular AI tools (e.g., ChatGPT and Gemini) show the thinking that went into the response they provide to a user. Figure 3 shows the steps Gemini went through to provide me with a non-response when I asked it to help me generate the masterpiece displayed above in Figure 2. While this might be more information than most users care to see, it provides a useful resource for a user to audit how the response came to be, and it has provided me with instructions on how I might proceed to address my task.
Figure 4 shows an example of a scorecard OpenAI makes available as an attempt to increase users’ trust. These scorecards are available for each ChatGPT model and go into the specifics of how the models perform in key areas such as hallucinations, health-based conversations, and much more. Read the scorecards closely and you will see that no AI model is perfect in any area. The user must remain in a “trust but verify” mode to make the relationship between human reality and AI work in a way that avoids potential catastrophe. There should never be blind trust in an LLM.
Designing For Trust Repair (Graceful Error Handling) And Not Knowing an Answer
Your AI will make mistakes.
Trust is not determined by the absence of errors, but by how those errors are handled.
Acknowledge Errors Humbly.
When the AI is wrong, it should be able to state that clearly. “My apologies, I misunderstood that request. Could you please rephrase it?” is far better than silence or a nonsensical answer.
Provide an Easy Path to Correction.
Make feedback mechanisms (like thumbs up/down or a correction box) obvious. More importantly, show that the feedback is being used. A “Thank you, I’m learning from your correction” can help rebuild trust after a failure, as long as it’s true.
Likewise, your AI can’t know everything. You should acknowledge this to your users.
UX practitioners should work with the product team to ensure that honesty about limitations is a core product principle.
This can include the following:
Establish User-Centric Metrics: Instead of only measuring engagement or task completion, UXers can work with product managers to define and track metrics like:
Hallucination Rate: The frequency with which the AI provides verifiably false information.
Successful Fallback Rate: How often the AI correctly identifies its inability to answer and provides a helpful, honest alternative.
Prioritize the “I Don’t Know” Experience: UXers should frame the “I don’t know” response not as an error state, but as a critical feature. They must lobby for the engineering and content resources needed to design a high-quality, helpful fallback experience.
UX Writing And Trust
All of these considerations highlight the critical role of UX writing in the development of trustworthy AI. UX writers are the architects of the AI’s voice and tone, ensuring that its communication is clear, honest, and empathetic. They translate complex technical processes into user-friendly explanations, craft helpful error messages, and design conversational flows that build confidence and rapport. Without thoughtful UX writing, even the most technologically advanced AI can feel opaque and untrustworthy.
The words and phrases an AI uses are its primary interface with users. UX writers are uniquely positioned to shape this interaction, ensuring that every tooltip, prompt, and response contributes to a positive and trust-building experience. Their expertise in human-centered language and design is indispensable for creating AI systems that not only perform well but also earn and maintain the trust of their users.
A few key areas for UX writers to focus on when writing for AI include:
Prioritize Transparency
Clearly communicate the AI’s capabilities and limitations, especially when it’s still learning or if its responses are generated rather than factual. Use phrases that indicate the AI’s nature, such as “As an AI, I can…” or “This is a generated response.”
Design for Explainability
When the AI provides a recommendation, decision, or complex output, strive to explain the reasoning behind it in an understandable way. This builds trust by showing the user how the AI arrived at its conclusion.
Emphasize User Control
Empower users by providing clear ways to provide feedback, correct errors, or opt out of certain AI features. This reinforces the idea that the user is in control and the AI is a tool to assist them.
The Ethical Tightrope: The Researcher’s Responsibility
As the people responsible for understanding and advocating for users, we walk an ethical tightrope. Our work comes with profound responsibilities.
The Danger Of “Trustwashing”
We must draw a hard line between designing for calibrated trust and designing to manipulate users into trusting a flawed, biased, or harmful system. For example, if an AI system designed for loan approvals consistently discriminates against certain demographics but presents a user interface that implies fairness and transparency, this would be an instance of trustwashing.
Another example of trustwashing would be if an AI medical diagnostic tool occasionally misdiagnoses conditions, but the user interface makes it seem infallible. To avoid trustwashing, the system should clearly communicate the potential for error and the need for human oversight.
Our goal must be to create genuinely trustworthy systems, not just the perception of trust. Using these principles to lull users into a false sense of security is a betrayal of our professional ethics.
To avoid and prevent trustwashing, researchers and UX teams should:
Prioritize genuine transparency.
Clearly communicate the limitations, biases, and uncertainties of AI systems. Don’t overstate capabilities or obscure potential risks.
Conduct rigorous, independent evaluations.
Go beyond internal testing and seek external validation of system performance, fairness, and robustness.
Engage with diverse stakeholders.
Involve users, ethics experts, and impacted communities in the design, development, and evaluation processes to identify potential harms and build genuine trust.
Be accountable for outcomes.
Take responsibility for the societal impact of AI systems, even if unintended. Establish clear and accessible mechanisms for redress when harm occurs, ensuring that individuals and communities affected by AI decisions have avenues for recourse and compensation, and commit to continuous improvement.
Educate the public.
Help users understand how AI works, its limitations, and what to look for when evaluating AI products.
Advocate for ethical guidelines and regulations.
Support the development and implementation of industry standards and policies that promote responsible AI development and prevent deceptive practices.
Be wary of marketing hype.
Critically assess claims made about AI systems, especially those that emphasize “trustworthiness” without clear evidence or detailed explanations.
Publish negative findings.
Don’t shy away from reporting challenges, failures, or ethical dilemmas encountered during research. Transparency about limitations is crucial for building long-term trust.
Focus on user empowerment.
Design systems that give users control, agency, and understanding rather than just passively accepting AI outputs.
The Duty To Advocate
When our research uncovers deep-seated distrust or potential harm — like the fear of job displacement — our job has only just begun. We have an ethical duty to advocate for that user. In my experience directing research teams, I’ve seen that the hardest part of our job is often carrying these uncomfortable truths into rooms where decisions are made. We must champion these findings and advocate for design and strategy shifts that prioritize user well-being, even when it challenges the product roadmap.
I personally try to approach presenting this information as an opportunity for growth and improvement, rather than a negative challenge.
For example, instead of stating “Users don’t trust our AI because they fear job displacement,” I might frame it as “Addressing user concerns about job displacement presents a significant opportunity to build deeper trust and long-term loyalty by demonstrating our commitment to responsible AI development and exploring features that enhance human capabilities rather than replace them.” This reframing can shift the conversation from a defensive posture to a proactive, problem-solving mindset, encouraging collaboration and innovative solutions that ultimately benefit both the user and the business.
It’s no secret that one of the more appealing areas for businesses to use AI is in workforce reduction. In reality, there will be many cases where businesses look to cut 10–20% of a particular job family due to the perceived efficiency gains of AI. However, giving users the opportunity to shape the product may steer it in a direction that makes them feel safer than if they do not provide feedback. We should not attempt to convince users they are wrong if they are distrustful of AI. We should appreciate that they are willing to provide feedback, creating an experience that is informed by the human experts who have long been doing the task being automated.
Conclusion: Building Our Digital Future On A Foundation Of Trust
The rise of AI is not the first major technological shift our field has faced. However, it presents one of the most significant psychological challenges of our current time. Building products that are not just usable but also responsible, humane, and trustworthy is our obligation as UX professionals.
Trust is not a soft metric. It is the fundamental currency of any successful human-technology relationship. By understanding its psychological roots, measuring it with rigor, and designing for it with intent and integrity, we can move from creating “intelligent” products to building a future where users can place their confidence in the tools they use every day. A trust that is earned and deserved.
Table 1: Published Academic Scales Measuring Trust In Automated Systems
Trust in Automation Scale
Focus: A 12-item questionnaire to assess trust between people and automated systems.
Key dimensions: Measures a general level of trust, including reliability, predictability, and confidence.
Citation: Jian, J. Y., Bisantz, A. M., & Drury, C. G. (2000). Foundations for an empirically determined scale of trust in automated systems. International Journal of Cognitive Ergonomics, 4(1), 53–71.
Trust of Automated Systems Test (TOAST)
Focus: A 9-item questionnaire used to measure user trust in a variety of automated systems, designed for quick administration.
Key dimensions: Divided into two main subscales: Understanding (the user’s comprehension of the system) and Performance (belief in the system’s effectiveness).
Citation: Wojton, H. M., Porter, D., Lane, S. T., Bieber, C., & Madhavan, P. (2020). Initial validation of the trust of automated systems test (TOAST). The Journal of Social Psychology, 160(6), 735–750.
Trust in Automation Questionnaire
Focus: A 19-item questionnaire capable of predicting user reliance on automated systems. A 2-item subscale is available for quick assessments; the full tool is recommended for a more thorough analysis.
Key dimensions: Measures six factors: reliability, understandability, propensity to trust, intentions of developers, familiarity, and trust in automation.
Citation: Körber, M. (2018). Theoretical considerations and development of a questionnaire to measure trust in automation. In Proceedings of the 20th Triennial Congress of the IEA. Springer.
Human Computer Trust Scale
Focus: A 12-item questionnaire created to provide an empirically sound tool for assessing user trust in technology.
Key dimensions: Divided into two key factors: Benevolence and Competence (the positive attributes of the technology) and Perceived Risk (the user’s subjective assessment of the potential for negative consequences when using a technical artifact).
Citation: Gulati, S., Sousa, S., & Lamas, D. (2019). Design, development and evaluation of a human-computer trust scale. Behaviour & Information Technology.
Appendix A: Trust-Building Tactics Checklist
To design for calibrated trust, consider implementing the following tactics, organized by the four pillars of trust:
1. Ability (Competence) & Predictability
✅ Set Clear Expectations: Use onboarding, tooltips, and empty states to honestly communicate the AI’s strengths and weaknesses.
✅ Show Confidence Levels: Display the AI’s uncertainty (e.g., “70% chance,” “85% confident”) or highlight less certain parts of its output.
✅ Provide Explainability (XAI): Offer useful, human-understandable rationales for the AI’s decisions or recommendations (e.g., “Because you frequently read X, I’m recommending Y”).
✅ Design for Graceful Error Handling:
✅ Acknowledge errors humbly (e.g., “My apologies, I misunderstood that request.”).
✅ Provide easy paths to correction (e. ] g., prominent feedback mechanisms like thumbs up/down).
✅ Show that feedback is being used (e.g., “Thank you, I’m learning from your correction”).
✅ Design for “I Don’t Know” Responses:
✅ Acknowledge limitations honestly.
✅ Prioritize a high-quality, helpful fallback experience when the AI cannot answer.
✅ Prioritize Transparency: Clearly communicate the AI’s capabilities and limitations, especially if responses are generated.
2. Benevolence
✅ Address Existential Fears: When users express concerns (e.g., job displacement), validate their concerns and reframe the feedback into actionable insights about collaborative tools.
✅ Prioritize User Well-being: Advocate for design and strategy shifts that prioritize user well-being, even if it challenges the product roadmap.
✅ Emphasize User Control: Provide clear ways for users to give feedback, correct errors, or opt out of AI features.
3. Integrity
✅ Adhere to Ethical Principles: Ensure the AI operates on predictable, ethical principles, demonstrating fairness and honesty.
✅ Prioritize Genuine Transparency: Clearly communicate the limitations, biases, and uncertainties of AI systems; avoid overstating capabilities or obscuring risks.
✅ Conduct Rigorous, Independent Evaluations: Seek external validation of system performance, fairness, and robustness to mitigate bias.
✅ Engage Diverse Stakeholders: Involve users, ethics experts, and impacted communities in the design and evaluation processes.
✅ Be Accountable for Outcomes: Establish clear mechanisms for redress and continuous improvement for societal impacts, even if unintended.
✅ Educate the Public: Help users understand how AI works, its limitations, and how to evaluate AI products.
✅ Advocate for Ethical Guidelines: Support the development and implementation of industry standards and policies that promote responsible AI.
✅ Be Wary of Marketing Hype: Critically assess claims about AI “trustworthiness” and demand verifiable data.
✅ Publish Negative Findings: Be transparent about challenges, failures, or ethical dilemmas encountered during research.
4. Predictability & Reliability
✅ Set Clear Expectations: Use onboarding, tooltips, and empty states to honestly communicate what the AI is good at and where it might struggle.
✅ Show Confidence Levels: Instead of just giving an answer, have the AI signal its own uncertainty.
✅ Provide Explainability (XAI) and Transparency: Offer a useful, human-understandable rationale for AI decisions.
✅ Design for Graceful Error Handling: Acknowledge errors humbly and provide easy paths to correction.
✅ Prioritize the “I Don’t Know” Experience: Frame “I don’t know” as a feature and design a high-quality fallback experience.
✅ Prioritize Transparency (UX Writing): Clearly communicate the AI’s capabilities and limitations, especially when it’s still learning or if responses are generated.
✅ Design for Explainability (UX Writing): Explain the reasoning behind AI recommendations, decisions, or complex outputs.
Original Source: https://www.creativebloq.com/creative-inspiration/advertising/are-christmas-ads-doomed-this-year
It’s not all gloom, says this advertising pro.
Original Source: https://smashingmagazine.com/2025/09/ambient-animations-web-design-principles-implementation/
Unlike timeline-based animations, which tell stories across a sequence of events, or interaction animations that are triggered when someone touches something, ambient animations are the kind of passive movements you might not notice at first. But, they make a design look alive in subtle ways.
In an ambient animation, elements might subtly transition between colours, move slowly, or gradually shift position. Elements can appear and disappear, change size, or they could rotate slowly.
Ambient animations aren’t intrusive; they don’t demand attention, aren’t distracting, and don’t interfere with what someone’s trying to achieve when they use a product or website. They can be playful, too, making someone smile when they catch sight of them. That way, ambient animations add depth to a brand’s personality.
To illustrate the concept of ambient animations, I’ve recreated the cover of a Quick Draw McGraw comic book as a CSS/SVG animation. The comic was published by Charlton Comics in 1971, and, being printed, these characters didn’t move, making them ideal candidates to transform into ambient animations.
FYI: Original cover artist Ray Dirgo was best known for his work drawing Hanna-Barbera characters for Charlton Comics during the 1970s. Ray passed away in 2000 at the age of 92. He outlived Charlton Comics, which went out of business in 1986, and DC Comics acquired its characters.
Tip: You can view the complete ambient animation code on CodePen.
Choosing Elements To Animate
Not everything on a page or in a graphic needs to move, and part of designing an ambient animation is knowing when to stop. The trick is to pick elements that lend themselves naturally to subtle movement, rather than forcing motion into places where it doesn’t belong.
Natural Motion Cues
When I’m deciding what to animate, I look for natural motion cues and think about when something would move naturally in the real world. I ask myself: “Does this thing have weight?”, “Is it flexible?”, and “Would it move in real life?” If the answer’s “yes,” it’ll probably feel right if it moves. There are several motion cues in Ray Dirgo’s cover artwork.
For example, the peace pipe Quick Draw’s puffing on has two feathers hanging from it. They swing slightly left and right by three degrees as the pipe moves, just like real feathers would.
#quick-draw-pipe {
animation: quick-draw-pipe-rotate 6s ease-in-out infinite alternate;
}
@keyframes quick-draw-pipe-rotate {
0% { transform: rotate(3deg); }
100% { transform: rotate(-3deg); }
}
#quick-draw-feather-1 {
animation: quick-draw-feather-1-rotate 3s ease-in-out infinite alternate;
}
#quick-draw-feather-2 {
animation: quick-draw-feather-2-rotate 3s ease-in-out infinite alternate;
}
@keyframes quick-draw-feather-1-rotate {
0% { transform: rotate(3deg); }
100% { transform: rotate(-3deg); }
}
@keyframes quick-draw-feather-2-rotate {
0% { transform: rotate(-3deg); }
100% { transform: rotate(3deg); }
}
Atmosphere, Not Action
I often choose elements or decorative details that add to the vibe but don’t fight for attention.
Ambient animations aren’t about signalling to someone where they should look; they’re about creating a mood.
Here, the chief slowly and subtly rises and falls as he puffs on his pipe.
#chief {
animation: chief-rise-fall 3s ease-in-out infinite alternate;
}
@keyframes chief-rise-fall {
0% { transform: translateY(0); }
100% { transform: translateY(-20px); }
}
For added effect, the feather on his head also moves in time with his rise and fall:
#chief-feather-1 {
animation: chief-feather-1-rotate 3s ease-in-out infinite alternate;
}
#chief-feather-2 {
animation: chief-feather-2-rotate 3s ease-in-out infinite alternate;
}
@keyframes chief-feather-1-rotate {
0% { transform: rotate(0deg); }
100% { transform: rotate(-9deg); }
}
@keyframes chief-feather-2-rotate {
0% { transform: rotate(0deg); }
100% { transform: rotate(9deg); }
}
Playfulness And Fun
One of the things I love most about ambient animations is how they bring fun into a design. They’re an opportunity to demonstrate personality through playful details that make people smile when they notice them.
Take a closer look at the chief, and you might spot his eyebrows rising and his eyes crossing as he puffs hard on his pipe. Quick Draw’s eyebrows also bounce at what look like random intervals.
#quick-draw-eyebrow {
animation: quick-draw-eyebrow-raise 5s ease-in-out infinite;
}
@keyframes quick-draw-eyebrow-raise {
0%, 20%, 60%, 100% { transform: translateY(0); }
10%, 50%, 80% { transform: translateY(-10px); }
}
Keep Hierarchy In Mind
Motion draws the eye, and even subtle movements have visual weight. So, I reserve the most obvious animations for the elements where I need to create the biggest impact.
Smoking his pipe clearly has a big effect on Quick Draw McGraw, so to demonstrate this, I wrapped his elements — including his pipe and its feathers — within a new SVG group, and then I made that wobble.
#quick-draw-group {
animation: quick-draw-group-wobble 6s ease-in-out infinite;
}
@keyframes quick-draw-group-wobble {
0% { transform: rotate(0deg); }
15% { transform: rotate(2deg); }
30% { transform: rotate(-2deg); }
45% { transform: rotate(1deg); }
60% { transform: rotate(-1deg); }
75% { transform: rotate(0.5deg); }
100% { transform: rotate(0deg); }
}
Then, to emphasise this motion, I mirrored those values to wobble his shadow:
#quick-draw-shadow {
animation: quick-draw-shadow-wobble 6s ease-in-out infinite;
}
@keyframes quick-draw-shadow-wobble {
0% { transform: rotate(0deg); }
15% { transform: rotate(-2deg); }
30% { transform: rotate(2deg); }
45% { transform: rotate(-1deg); }
60% { transform: rotate(1deg); }
75% { transform: rotate(-0.5deg); }
100% { transform: rotate(0deg); }
}
Apply Restraint
Just because something can be animated doesn’t mean it should be. When creating an ambient animation, I study the image and note the elements where subtle motion might add life. I keep in mind the questions: “What’s the story I’m telling? Where does movement help, and when might it become distracting?”
Remember, restraint isn’t just about doing less; it’s about doing the right things less often.
Layering SVGs For Export
In “Smashing Animations Part 4: Optimising SVGs,” I wrote about the process I rely on to “prepare, optimise, and structure SVGs for animation.” When elements are crammed into a single SVG file, they can be a nightmare to navigate. Locating a specific path or group can feel like searching for a needle in a haystack.
That’s why I develop my SVGs in layers, exporting and optimising one set of elements at a time — always in the order they’ll appear in the final file. This lets me build the master SVG gradually by pasting in each cleaned-up section.
I start by exporting background elements, optimising them, adding class and ID attributes, and pasting their code into my SVG file.
Then, I export elements that often stay static or move as groups, like the chief and Quick Draw McGraw.
Before finally exporting, naming, and adding details, like Quick Draw’s pipe, eyes, and his stoned sparkles.
Since I export each layer from the same-sized artboard, I don’t need to worry about alignment or positioning issues as they all slot into place automatically.
Implementing Ambient Animations
You don’t need an animation framework or library to add ambient animations to a project. Most of the time, all you’ll need is a well-prepared SVG and some thoughtful CSS.
But, let’s start with the SVG. The key is to group elements logically and give them meaningful class or ID attributes, which act as animation hooks in the CSS. For this animation, I gave every moving part its own identifier like #quick-draw-tail or #chief-smoke-2. That way, I could target exactly what I needed without digging through the DOM like a raccoon in a trash can.
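If it helps to picture that structure, here is a minimal sketch of how such an SVG might be organised. The geometry is placeholder shapes, not Dirgo’s artwork, and note that the shadow sits outside the wobbling group so it can later be animated with its own mirrored values:
<svg viewBox="0 0 800 600" xmlns="http://www.w3.org/2000/svg">
<!-- Background layer: exported and pasted in first, stays static -->
<g id="background">
<rect width="800" height="600" fill="#f6e7c1"/>
</g>
<!-- Shadow lives outside the group so it can mirror the wobble -->
<ellipse id="quick-draw-shadow" cx="400" cy="540" rx="140" ry="24" fill="#000" fill-opacity="0.2"/>
<!-- Quick Draw, his pipe, and the pipe's feathers wobble as one unit -->
<g id="quick-draw-group">
<circle cx="400" cy="300" r="120" fill="#f4a261"/>
<g id="quick-draw-pipe">
<rect x="470" y="330" width="90" height="12" fill="#6d4c41"/>
<path id="quick-draw-feather-1" d="M540 342 q-6 28 6 48" stroke="#e76f51" fill="none"/>
<path id="quick-draw-feather-2" d="M552 342 q6 28 -6 48" stroke="#2a9d8f" fill="none"/>
</g>
</g>
</svg>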
Once the SVG is set up, CSS does most of the work. I can use @keyframes for more expressive movement, or animation-delay to simulate randomness and stagger timings. The trick is to keep everything subtle and remember I’m not animating for attention, I’m animating for atmosphere.
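For example, one way to fake randomness is to reuse a single set of @keyframes and offset each copy with animation-delay. This is a small sketch; the sparkle IDs and keyframe name are mine, not from the finished piece:
#sparkle-1 {
animation: sparkle-twinkle 4s ease-in-out infinite;
}
#sparkle-2 {
animation: sparkle-twinkle 4s ease-in-out infinite;
animation-delay: 1.3s;
}
#sparkle-3 {
animation: sparkle-twinkle 4s ease-in-out infinite;
animation-delay: 2.7s;
}
/* One shared twinkle; the offsets stop the sparkles pulsing in sync */
@keyframes sparkle-twinkle {
0%, 100% { opacity: 0; }
50% { opacity: 1; }
}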
Remember that most ambient animations loop continuously, so they should be lightweight and performance-friendly. And of course, it’s good practice to respect users who’ve asked for less motion. You can wrap your animations in a prefers-reduced-motion media query so they only run when they’re welcome.
@media (prefers-reduced-motion: no-preference) {
#quick-draw-shadow {
animation: quick-draw-shadow-wobble 6s ease-in-out infinite;
}
}
It’s a small touch that’s easy to implement, and it makes your designs more inclusive.
Ambient Animation Design Principles
If you want your animations to feel ambient, more like atmosphere than action, it helps to follow a few principles. These aren’t hard and fast rules, but rather things I’ve learned while animating smoke, sparkles, eyeballs, and eyebrows.
Keep Animations Slow And Smooth
Ambient animations should feel relaxed, so use longer durations and choose easing curves that feel organic. I often use ease-in-out, but cubic Bézier curves can also be helpful when you want a more relaxed feel and the kind of movements you might find in nature.
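As a sketch, a custom curve might look like this; the smoke selector matches the IDs used earlier, but the keyframe name and control points are just illustrative:
#chief-smoke-2 {
animation: chief-smoke-drift 8s cubic-bezier(0.37, 0, 0.63, 1) infinite;
}
/* Drifts up and fades out; the loop resets while the smoke is invisible */
@keyframes chief-smoke-drift {
0% { transform: translateY(0); opacity: 0.8; }
100% { transform: translateY(-40px); opacity: 0; }
}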
Loop Seamlessly And Avoid Abrupt Changes
Hard resets or sudden jumps can ruin the mood, so if an animation loops, ensure it cycles smoothly. You can do this by matching the start and end keyframes, or by setting animation-direction to alternate so the animation plays forward, then back. Both approaches are sketched below.
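Here, the sway keyframes are a hypothetical example, while the feather rule reuses the pattern from earlier:
/* Option 1: the start and end keyframes match, so the cycle has no seam */
@keyframes feather-sway {
0%, 100% { transform: rotate(-3deg); }
50% { transform: rotate(3deg); }
}
/* Option 2: alternate plays the animation forward, then in reverse */
#chief-feather-1 {
animation: chief-feather-1-rotate 3s ease-in-out infinite alternate;
}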
Use Layering To Build Complexity
A single animation might be boring. Five subtle animations, each on separate layers, can feel rich and alive. Think of it like building a sound mix — you want variation in rhythm, tone, and timing. In my animation, sparkles twinkle at varying intervals, smoke curls upward, feathers sway, and eyes boggle. Nothing dominates, and each motion plays its small part in the scene.
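In CSS, that richness mostly comes from giving each layer its own duration and easing so the loops drift in and out of phase. A rough sketch, reusing the keyframes from the earlier examples (the tail-flick keyframes are hypothetical and not shown):
/* Mismatched durations keep the layers from ever locking into step */
#quick-draw-tail {
animation: quick-draw-tail-flick 7s ease-in-out infinite alternate;
}
#chief-smoke-2 {
animation: chief-smoke-drift 8s cubic-bezier(0.37, 0, 0.63, 1) infinite;
}
#sparkle-1 {
animation: sparkle-twinkle 4s ease-in-out infinite;
}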
Avoid Distractions
The point of an ambient animation is that it doesn’t dominate. It’s a background element and not a call to action. If someone’s eyes are drawn to a raised eyebrow, it’s probably too much, so dial back the animation until it feels like something you’d only catch if you’re really looking.
Consider Accessibility And Performance
Check prefers-reduced-motion, and don’t assume everyone’s device can handle complex animations. SVG and CSS are light, but blur filters, drop shadows, and complex CSS animations can still tax lower-powered devices. When an animation is purely decorative, consider adding aria-hidden="true" to keep it from cluttering up the accessibility tree.
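For instance, in the SVG itself, a purely decorative group can opt out of the accessibility tree (a sketch; the path data is placeholder):
<!-- Decorative smoke: still animated, but hidden from screen readers -->
<g id="chief-smoke-2" aria-hidden="true">
<path d="M120 80 q10 -20 0 -40" stroke="#cccccc" fill="none"/>
</g>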
Quick On The Draw
Ambient animation is like seasoning on a great dish. It’s the pinch of salt you barely notice, but you’d miss when it’s gone. It doesn’t shout, it whispers. It doesn’t lead, it lingers. It’s floating smoke, swaying feathers, and sparkles you catch in the corner of your eye. And when it’s done well, ambient animation adds personality to a design without asking for applause.
Now, I realise that not everyone needs to animate cartoon characters. So, in part two, I’ll share how I created animations for several recent client projects. Until next time, if you’re crafting an illustration or working with SVG, ask yourself: What would move if this were real? Then animate just that. Make it slow and soft. Keep it ambient.
You can view the complete ambient animation code on CodePen.
Original Source: https://webdesignerdepot.com/designing-for-dribbble-killed-real-web-creativity/
Dribbble didn’t inspire a new era of web creativity—it domesticated it. In chasing pretty pixels for clout, we forgot how to design for actual humans. The web is now full of sexy shots and broken experiences—and it’s time we admit that designing for Dribbble killed real creativity…
Original Source: https://ecommerce-platforms.com/articles/creativehub-vs-printful
CreativeHub and Printful are two popular platforms for print-on-demand (POD) businesses — but which is the better choice for your online store?
Whether you’re an artist selling fine art prints or an ecommerce entrepreneur looking to expand your product range, choosing the right POD provider plays a major role in how you price, promote, and scale your business.
In this comparison, I’ll break down CreativeHub vs Printful based on core features like product range, print quality, pricing, branding, and store integrations.
I’ve tested both platforms and analyzed real-world use cases to give you a clear recommendation.
Quick Verdict: Printful vs CreativeHub
Printful – Best overall if you’re running a general ecommerce store with varied products and custom branding.
CreativeHub – Best for professional artists focused solely on high-quality art prints.
Feature | Printful | CreativeHub
Product Range | 300+ products | Fine art prints only
Print Quality | Commercial grade (good) | Gallery-level Giclée
Custom Branding | Yes | No
Integrations | Shopify, Etsy, Amazon & more | Shopify only
Fulfillment Centers | US, EU, Mexico, Canada | UK only
Ideal For | Ecommerce sellers and brands | Professional artists and studios
Product Range: Printful Offers More Variety
When it comes to product options, the two platforms are designed for very different business models.
Printful
Printful is built for ecommerce sellers who want to offer a wide range of custom products.
You can add everything from apparel to home decor, without managing inventory or logistics. This flexibility makes it easy to test different product categories.
Available product types:
T-shirts, hoodies, and hats
Posters and framed prints
Mugs, tumblers, and water bottles
Backpacks and bags
Pillows, blankets, and wall art
Stickers, journals, and phone cases
With more than 300 SKUs available and new ones added regularly, Printful is better suited for sellers who want variety and seasonal product drops.
CreativeHub
CreativeHub offers one thing — high-end art prints.
The platform doesn’t support apparel or accessories, and you won’t find print-on-demand merchandise outside of wall art.
That said, their offering is specifically tailored to professional artists and galleries, so the focus is on quality rather than variety.
CreativeHub product types:
Giclée prints (various sizes and paper types)
Framed and unframed options
Museum-grade materials
If your business revolves solely around fine art reproduction, CreativeHub keeps things simple and focused.
Print Quality: CreativeHub Wins for Professional Art Prints
If quality is your top concern — especially for art — there’s a noticeable difference between the two platforms.
CreativeHub
CreativeHub uses museum-quality Giclée printing, which offers exceptional image fidelity, archival inks, and premium paper types.
This is the same standard used by professional galleries and collectors.
Print quality features:
12-color inkjet system for rich color depth
Archival Hahnemühle and Fuji papers
Accurate color reproduction for fine detail
Acid-free, fade-resistant materials
The results are ideal for limited-edition art, high-resolution photography, and gallery sales.
Printful
Printful offers solid quality, especially for apparel and general merchandise.
For posters and prints, they also offer Giclée printing, but the materials and production process don’t match CreativeHub’s gallery-grade standards.
What you can expect:
DTG (direct-to-garment) for clothing
Giclée available for art posters
Sublimation for all-over print items
Good, but not fine-art level, detail and color accuracy
Unless you’re printing large-scale, collectible art prints, Printful’s quality will be more than adequate for most ecommerce uses.
Pricing and Margins: Printful Offers More Flexibility
Let’s compare pricing — including base costs, profit margins, and control over your pricing strategy.
Platform | Example Product | Base Cost (USD) | Shipping (US) | Price Control
CreativeHub | A3 Giclée Print | $19.00–$22.00 | $10–$15 | Fixed markup only
Printful | Poster (12″x18″) | $7.95–$9.95 | $4.00–$6.00 | Full control
Printful | Unisex T-shirt | $9.25–$12.00 | $4.00–$5.50 | Full control
CreativeHub
CreativeHub requires you to work with a fixed base price.
You choose your markup (e.g., 50%, 100%), but you can’t set an arbitrary retail price of your own, which limits your ability to compete on pricing or run promotions.
This setup works for art prints with high perceived value, but it leaves little flexibility in your pricing strategy.
Printful
Printful lets you set your retail price on every product.
You control the final price, discounts, and margin levels. That flexibility is key when you’re scaling with ads or testing pricing strategies.
Because Printful’s base prices are generally lower than CreativeHub’s, and because it offers a broader range of low-cost items, it’s easier to create a profitable catalog — especially for entry-level products like t-shirts or mugs.
Branding and Packaging: Printful Is the Clear Winner
One of the most overlooked aspects of ecommerce is branding at the delivery stage — and here’s where Printful has a major advantage.
Printful
Printful offers custom branding features that help your store look professional and stay consistent across every customer touchpoint.
Branding options include:
Custom pack-ins (thank-you cards, promo flyers)
Inside and outside shirt labels
Custom packing slips with your logo
Branded packaging (in some fulfillment centers)
This allows you to create a memorable unboxing experience that builds brand loyalty and helps your store stand out.
CreativeHub
CreativeHub does not offer any branding options.
Orders are shipped in plain packaging with no reference to your brand or store.
This is fine for high-end art customers who may not expect branded packaging, but it’s a missed opportunity if you’re looking to build a recognizable business.
Store Integrations: Printful Supports More Platforms
If you’re selling across multiple channels, integration options are essential for automation and scalability.
CreativeHub
CreativeHub integrates only with Shopify.
There’s no direct support for other ecommerce platforms like WooCommerce, Wix, Etsy, or Amazon.
While the Shopify integration is functional, it’s limited in customization and doesn’t offer a lot of backend automation beyond basic product syncing and order forwarding.
Printful
Printful integrates with over 20 ecommerce platforms, marketplaces, and website builders.
Available integrations:
Shopify
WooCommerce
Etsy
Amazon
eBay
Squarespace
Wix
BigCommerce
It also has API access and order routing features, making it a better long-term choice for multichannel sellers and growing businesses.
Fulfillment and Shipping: Printful Has Global Reach
Where your orders are fulfilled matters — especially for delivery speed and shipping costs.
CreativeHub
All CreativeHub orders are fulfilled from the United Kingdom.
If most of your customers are in the UK or EU, this works well. But for US-based buyers, delivery times are longer and costs are higher.
Shipping details:
Fulfilled in London
7–14 business days for US orders
Shipping starts at ~$10 to the US
No express shipping options
For North American businesses, this can be a disadvantage — especially if you rely on quick delivery or Amazon-like speed.
Printful
Printful operates fulfillment centers in the US, Canada, Mexico, Latvia, and Spain, allowing faster delivery and lower shipping rates in most major markets.
Shipping advantages:
Local fulfillment = faster delivery
3–7 business days average in the US
International fulfillment centers reduce customs delays
Express options available in most regions
If customer experience and shipping time are high priorities, Printful offers more reliability and scalability.
Ease of Use and Support: Printful Is More User-Friendly
Finally, let’s talk about usability and support — two factors that can make a huge difference when you’re just getting started.
CreativeHub
CreativeHub’s interface is basic and focused solely on uploading and managing art prints.
The platform lacks onboarding tools, has limited documentation, and support is email-only, with 24–72 hour response times.
Pros:
Simple interface for art uploads
Good enough for low-volume shops
Cons:
No live chat or phone support
No built-in analytics or sales tracking
Not ideal for fast-moving ecommerce businesses
Printful
Printful is built for ecommerce sellers.
It includes guided onboarding, live chat, detailed tutorials, mockup generators, and sales analytics. The backend is much more robust.
Support features:
24/7 live chat
Email support
Extensive help center and tutorials
Guided setup flows
Printful is easier to learn, faster to troubleshoot, and more equipped for ecommerce sellers who want to grow.
Final Verdict: Which Should You Use?
Use Case | Best Platform
Selling fine art prints only | CreativeHub
Selling multiple product types | Printful
Building a custom-branded store | Printful
Selling mostly in the US | Printful
Selling only in the UK or EU | CreativeHub
Running a multichannel ecommerce business | Printful
Focused on gallery-grade print quality | CreativeHub
My Recommendation
If you’re a professional artist looking to sell gallery-grade prints in the UK or EU, CreativeHub is a focused, high-quality option.
But if you want to grow a broader ecommerce business, sell worldwide, and build a branded store with hundreds of product options, Printful is the better overall platform.
It offers more control, more integrations, better branding, faster fulfillment, and a more flexible pricing model — making it the clear choice for most ecommerce sellers.
Original Source: https://webdesignerdepot.com/confessions-of-a-web-design-generalist-a-k-a-the-person-who-does-literally-everything/
Web design’s real MVPs aren’t specialists—they’re the generalists quietly doing everything. These multitasking heroes hold the internet together with duct tape and Google searches. This is your gloriously chaotic love letter to the people who do it all.
Original Source: https://webdesignerdepot.com/what-is-web-design-in-2025/
Web design in 2025 isn’t about pushing pixels—it’s about creating living, breathing digital spaces that adapt, empathize, and evolve. AI builds the bones, but human designers still shape the soul. It’s not about trends anymore—it’s about trust, truth, and radical digital hospitality.
Original Source: https://www.sitepoint.com/problems-and-solutions-with-fast-api-servers/?utm_source=rss
Build robust FastAPI services by tackling the top problems: messy project layout, anti‑patterns like endpoint‑to‑endpoint calls, and memory leaks from multiple workers.
Continue reading “Common Problems and Solutions When Building FastAPI Servers” on SitePoint.