How AI Technology Will Transform Design

Original Source:

AI-generated art is everywhere on the web. If you are an active Instagram, Twitter, or Pinterest user, you have likely seen interesting artworks created with text-based tools like DALL·E, Midjourney, or Stable Diffusion. The magic of these tools is that to generate an image, all you need to do is provide a string of text describing what the image should show. Many AI-generated works look stunning, but this is only the beginning. In the foreseeable future, AI tools will be so intuitive that everyone will be able to express their ideas through them. The rise of tools with AI at their core makes design practitioners wonder whether AI will replace designers.

In this article, we will review the current state of design, answer common questions designers have about AI tools, and share practical tips on how designers can make the most of them.

Design Tools Learning Curve And Creativity

Mastering any skill takes time, and design is no exception. Designers have a lot of great tools in their arsenal, but honing design talent takes years. You need to invest years of your life before you can create decent artwork.

Human-made design: glass reflection CGI. Rendering a few seconds of footage took 87 hours on five RTX GPUs. (Image by Gleb Kuznetsov)

No matter how creative you are, you must spend time creating things by hand. Most of the time, it’s impossible to go from idea to solution in a few minutes. As a result, it sometimes feels like design is 95% craft and only 5% art.

Much energy goes into the visualization of ideas, and it can be very frustrating to learn that your idea doesn’t resonate with the audience. Once you publish your work, you might discover that it’s not what your audience wants, and after an unsuccessful pitch, the work goes straight into the garbage bin.

But in the near future, you will be able to take shortcuts and go from idea to final work in minutes rather than hours or days. You will be able to skip the tedious process of physically making art and instead become a visionary who tells the computer what you want to build and lets it do the work for you. You can experience the power of AI tools even today with DALL·E 2 by OpenAI, Midjourney, or Stable Diffusion.

Let’s answer a few popular questions that designers have regarding AI.

Can I Take Credit For Artworks Created By AI?

The answer is yes, you can, but you shouldn’t. Many AI artwork generation tools available on the market don’t give designers much freedom to control the process of artwork creation. As a designer, you explain your intention to the AI system through plain words and let the tool do its magic. You have limited or no information on how the tool works.

Because modern AI tools don’t give you much freedom to influence the design direction, the final result misses the human touch. Right now, you cannot convey much personality in works generated by AI tools. That doesn’t mean it will stay true in the future, though: we will likely see tools that give designers more control over the process of creating visual assets.

Will AI Take My Job?

Many professional artists panic because they see how good artificial intelligence has become at creating artwork. AI-generated art fills the market and takes potential clients. Instead of hiring a human digital artist, many companies ‘hire’ AI to do the job because it can do design work for a fraction of the cost. This trend not only takes jobs but also lowers the market value of art: artworks become less valuable because people see how easy it is to generate them with AI.

What happens right now is a predictable situation. It’s just how business works. If a business can save money by following a more effective approach, it will do it. During the industrial revolution of the 19th century, some English textile workers intentionally destroyed textile machines because they were afraid that machines would replace them. Of course, machines replaced some of the roles (typically, roles where heavy lifting or monotonous work was required), but they didn’t replace humans. The same is true for AI tools. AI won’t completely replace human ingenuity; it will complement human potential.

The true power of AI is not about replacing humans but instead giving them a massive boost in productivity.

If you think about why people invented new tools in the first place, it becomes evident that work efficiency was the number one reason, and the same is true for AI: it will help us work more efficiently.

The quality of your ideas and your ability to understand user problems and create solutions that help people are critically important in any era of product design, including the era of AI.

Will AI Tools Lead Us To Generic Design?

When designers use the same tools and data inputs, they can easily end up producing homogenized designs that look generic.

But the problem of homogenized design is not new. The ‘Dribbblisation’ of design has been a big topic in the field for years, with many people in the industry worrying that the vast majority of product design work on Dribbble looks the same because the same styles are applied everywhere.

Will the problem get worse as AI tools are popularized? The answer is no. If you look closer at the artists who publish their work on Dribbble, you will notice that only a few of them set trends. Once a new trend emerges and resonates with the audience, many designers start to follow it and produce designs that look trendy.

“Out in the sun, some painters are lined up. The first is copying nature; the second is copying the first; the third is copying the second.”
— Paul Gauguin

AI tools won’t replace all designers anytime soon because imagination and creativity will remain the defining properties of the artist’s mind. Until AI technology becomes sophisticated enough to do creative thinking on its own, we don’t have to worry about AI setting trends. The point is that soon it will be possible to curate the data you provide as input to the system; the AI will learn from you, and the results will carry a lot of your personality.

Can We Face Legal Troubles Using AI Tools?

Early in 2023, a group of artists filed a class-action lawsuit against Midjourney and Stability AI, claiming copyright infringement. Both Midjourney and Stable Diffusion were trained on billions of internet images, and the suit alleges that the companies behind those tools “violated the rights of millions of artists” who created the original images. Whether AI art tools violate copyright law can be challenging to determine because the database used for training is massive (billions of images). But one thing is for sure: AI tools create new images based on the knowledge they acquired during training.

Can designers face legal trouble for using AI tools in the future? So far, there is no single correct answer to this question. But the world is quickly embracing AI art (stock photo banks, for example, are starting to sell AI-generated imagery), and we will likely have clearer rules on how to use AI-generated images in the future.

The New Chapter In Design: Co-creation With AI

When Steve Jobs explained the power of computers, he said,

“What a computer is to me is it’s the most remarkable tool that we’ve ever come up with, and it’s the equivalent of a bicycle for our minds.”
— Steve Jobs

It’s possible to rephrase this quote in the context of AI: AI is a bicycle for our creativity, our ability to create something new. Creativity is based on the life experiences and ideas of the creator. AI cannot replace humans because it uses work that humans created as input for producing new designs. But AI can boost creativity greatly, becoming a sort of ‘second brain’ that works alongside a creator and provides new inputs.

Of course, modern AI tools don’t give us much freedom to tweak the AI engine, but they still give us a lot of power. They can provide us with ideas we didn’t think of. It makes AI an excellent tool for discovery and exploration.

Here are just a few directions in which humans and machines can work together in the future:

Conduct Visual Exploration

AI tools capture the collective experience of millions of images from photo banks and give creators a unique opportunity to quickly explore a desired direction without spending too much energy. AI becomes your creative assistant during visual exploration: you prompt the system with the directions you want to pursue, let it generate various outcomes, evaluate each direction, and choose the best one. The process of co-creation can be iterative. For example, once you see a promising design direction, you can tell the system to dive deeper into it.

There are two ways you can approach visual exploration, either by following text-to-image or image-to-image scenarios.

In a text-to-image scenario, you provide a prompt and tweak some settings to produce an image. Let’s discuss the most important properties of this scenario:

A prompt is a text string that you submit to the system so that it can create an image for you. Generally, the more specific details you provide, the better the results the system will generate. You can use resources like Lexica to find a relevant prompt.
Think of steps as iterations of the image creation process. During the first steps, the image looks very noisy, and many elements are blurry. The system refines the image with every iteration by altering its visual details. If you use Stable Diffusion, set steps to 60 or more.

You can use the seed number to create a close copy of a specific picture. For example, if you want to reproduce an image you saw on Lexica, you need to specify both the prompt and the seed number of that image.
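To make these three properties concrete, here is a small sketch of how a prompt, a steps count, and a seed might be assembled into a generation request. The `build_request` helper and its field names are hypothetical; they only mirror the parameters discussed above, not any real tool’s API.

```python
# Hypothetical sketch: packaging the prompt, steps, and seed parameters
# discussed above into a single generation request.

def build_request(prompt, steps=60, seed=None):
    """Assemble the generation parameters for one image."""
    request = {
        "prompt": prompt,  # the text description of the desired image
        "steps": steps,    # refinement iterations (60+ suggested for Stable Diffusion)
    }
    if seed is not None:
        request["seed"] = seed  # fixes the starting noise, making results reproducible
    return request

req = build_request("a glass orb on a beach at sunset", steps=60, seed=42)
```

Re-running the same prompt with the same seed and steps should reproduce a near-identical image, which is exactly how you recreate a picture you found on Lexica.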

In the image-to-image (img2img) scenario, AI will use your image as a source and produce variations of the image based on it. For example, here is how we can use a famous painting, Under the Wave off Kanagawa, as a source for Stable Diffusion.

We can play with Image Strength by setting it close to 0 so that the AI has more freedom in how it interprets the image. As you can see below, the image the system generated for us shares only a few visual attributes with the original.

Or we can set Image Strength up to 95% so that the AI creates only a slightly different version of the original image.
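One way to build intuition for Image Strength is to treat it as a weight on how much of the source image survives in the result. Real img2img is a diffusion process rather than a pixel blend, so the sketch below is only an analogy; the `blend` helper and its sample values are invented for illustration.

```python
# Toy analogy for Image Strength: a weight between the source image and
# what the model would paint on its own. (Real img2img is a diffusion
# process, not a linear blend; this is only an intuition aid.)

def blend(source_pixels, generated_pixels, strength):
    """strength near 1.0 keeps the source; near 0.0 gives the AI free rein."""
    return [
        strength * s + (1.0 - strength) * g
        for s, g in zip(source_pixels, generated_pixels)
    ]

source = [0.9, 0.8, 0.1]     # e.g. brightness samples from the Kanagawa wave
generated = [0.2, 0.5, 0.7]  # what the model would paint unconstrained

faithful = blend(source, generated, 0.95)  # close copy of the original
free = blend(source, generated, 0.05)      # mostly the model's own idea
```

At strength 0.95 the output stays close to the source values; at 0.05 it is dominated by the model’s own output, matching the two experiments described above.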

Our experiment suggests that AI tools can replace mood boards. You no longer need to create mood boards manually in tools like Pinterest; instead, you can tell the system which ideas you want to explore.

Create A Complete Design For Your Product

AI can be an excellent tool for implementing ideas quickly. Today, the product design process is long and painful: going from idea to implementation takes weeks. With AI, it can take minutes. You can create a storyboard for your product, specify the context of use, and let AI design the product.

Providing these details is important because AI should understand the nature of the problem you’re trying to solve with the design. For example, below are visuals that you can create right now using a tool called Midjourney. All you need to do is specify the text prompt “mobile app UI design, hotel booking, Dribbble, Behance --v 4 --q 2”.

The part “mobile app UI design, hotel booking, Dribbble, Behance” is self-explanatory, but you might wonder what --v and --q mean.

--v sets the version of Midjourney.
On November 10, 2022, the alpha iteration of version 4 was released to users.
--q sets the quality.
This setting specifies how much rendering time you want to spend. The default value is 1; higher values take more time and cost more.
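The flag syntax above can also be assembled programmatically. The small helper below is hypothetical; only the `--v` and `--q` flags and the default quality of 1 come from the description above.

```python
# Hypothetical helper that assembles a Midjourney-style prompt string
# with the --v (version) and --q (quality) flags described above.

def midjourney_prompt(description, version=4, quality=1):
    parts = [description, f"--v {version}"]
    if quality != 1:  # 1 is the default, so only append --q when it changes
        parts.append(f"--q {quality}")
    return " ".join(parts)

prompt = midjourney_prompt(
    "mobile app UI design, hotel booking, Dribbble, Behance",
    version=4,
    quality=2,
)
# prompt == "mobile app UI design, hotel booking, Dribbble, Behance --v 4 --q 2"
```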

It’s important to mention a couple of common issues that images generated by Midjourney have:

Gibberish text
You likely noticed that the text on the mobile app screens in the example above is not real English.
Extra fingers
If you generate an image of a person, you will likely see extra fingers on their hands.

In the foreseeable future, a design created by AI will automatically inherit industry best practices, freeing designers from time-consuming activities like UI design audits. AI tools will significantly speed up the user research and design exploration phases because they analyze massive amounts of data and can easily provide relevant details for a particular product (e.g., create a user persona or draft a user journey). As a result, it will be possible to develop new products right during brainstorming sessions, so designers are no longer limited to low-fidelity wireframes or paper sketches; product team members will be able to see how the product will look and work right during the session.

Create Virtual Worlds And Virtual People In Them

No doubt the metaverse will be the next big thing. It will be the most sophisticated digital platform humans have ever created, and content production will be an integral part of its design. Designers will have to find ways to speed up the creation of virtual environments and the activities in them. At first, designers will likely try to recreate real-world places in the virtual world, but after that, they will rely on AI to do the rest. The role of the designer in the metaverse will be more like that of a director who tailors the results rather than a craftsman who does everything by hand. Imagine being able to create large virtual areas such as cities and get a sense of their scale by experiencing them.

It’s Time To Open A New Chapter In Design

AI-powered design solutions have an opportunity to become much more than just tools designers use to create assets; they have the chance to become a natural extension of the team. I believe the true power of AI tools will shine when they learn from a creator and can reflect the creator’s personality in the final design. Next-gen AI will learn both about you and from you and create works whose functionality and aesthetics meet your needs and taste. As a result, the output these tools produce will carry a more authentic human fingerprint.

The future of design is bright because technology will allow more people to express their creativity and make our world more interesting and richer.

Further Reading On SmashingMag

“The Future Of Design: Human-Powered Or AI-Driven?”, Keima Kai
“How AI Is Helping Solve Climate Change”, Nicholas Farmen
“Building A Simple AI Chatbot With Web Speech API And Node.js”, Tomomi Imura
“Algorithm-Driven Design: How Artificial Intelligence Is Changing Design”, Yury Vetrov

Double Image Hover Effects with Clip-Path Animations

Original Source:

Some ideas for hover effects using clip-path animations where the same image gets revealed in a creative way.

The best dock for MacBook Pro in 2023

Original Source:

Out of ports? Find the best dock for your MacBook Pro to link up all the peripherals!

The Difference Between the :where() and :is() CSS Selectors

Original Source:

The CSS selectors :where() and :is() are two pseudo-classes that let you match an element against a list of selectors. Although they sound similar, they differ in functionality and, most importantly, in specificity.

The :where() pseudo-class was introduced as part of the CSS Selectors Level 4 specification. It accepts a list of selectors and matches any element that matches at least one of them, while always contributing zero specificity to the rule it appears in. For example, you can use the :where() selector to give every list item li inside either an unordered list ul or an ordered list ol a background:

:where(ul, ol) li {
  background-color: yellow;
}

On the other hand, the :is() pseudo-class, also part of the CSS Selectors Level 4 specification, allows you to select an element if it matches one of several different selectors. It is similar to the logical OR operator in CSS. For example, you can use the :is() selector to select a p element if it is either the first child of its parent or has a class of highlight:

p:is(:first-child, .highlight) {
  background-color: yellow;
}

It’s important to note that the real difference between the two is specificity: :is() takes on the specificity of its most specific argument, whereas :where() always has zero specificity, which makes rules written with :where() easy to override.

Quick Tip: How to Handle Exceptions in PHP

Original Source:

Learn the basics of exceptions in PHP, how to use throw, try and catch effectively, and how to write your own custom exceptions.

Continue reading Quick Tip: How to Handle Exceptions in PHP on SitePoint.

Scandinavian Vs Nordic Design

Original Source:

I’ve always been drawn to the clean lines, muted colors, and minimalist aesthetic of Scandinavian and Nordic styles. Recently, I decided to redecorate my apartment with a mix of both, thinking that they were pretty much interchangeable. But as I started to curate furniture and decor, I realized that I wasn’t quite sure of the …

5 Plagiarism Detection Tools to Tell if Content is Written by ChatGPT

Original Source:

AI is achieving new feats each day, and ChatGPT is its most recent one. OpenAI’s ChatGPT has taken the Internet by storm. From providing answers to just about any question to doing college assignments and homework, it can do a lot.

However, this has also created challenges around AI-assisted plagiarism for the academic community and content creators. So in this post, we’re giving you a list of online plagiarism detection tools that catch bot-generated content. Take a look to see which one best helps you detect AI-assisted plagiarism.

1. GPTZero

GPTZero is an online app that can detect AI-assisted plagiarism in any text. Developed by Edward Tian, it is a minimal yet powerful app that can tell whether a piece was written by an AI bot like ChatGPT or by a human.

GPTZero scores the text entered into the app on perplexity, the randomness within a sentence, and burstiness, which measures how much that randomness varies across the entire text. Text written by humans tends to have higher perplexity and higher burstiness.
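As a rough illustration of these two signals, the toy functions below compute perplexity from per-word probabilities and burstiness as the spread of per-sentence perplexities. GPTZero’s actual scoring is not public, so treat this as an intuition aid, not its real algorithm.

```python
import math

# Toy versions of the two signals. A real detector gets per-word
# probabilities from a language model; here we assume they are given.

def perplexity(word_probs):
    """exp of the average negative log-probability: higher = more surprising."""
    return math.exp(-sum(math.log(p) for p in word_probs) / len(word_probs))

def burstiness(sentence_perplexities):
    """Spread (standard deviation) of per-sentence perplexity."""
    mean = sum(sentence_perplexities) / len(sentence_perplexities)
    return math.sqrt(
        sum((x - mean) ** 2 for x in sentence_perplexities)
        / len(sentence_perplexities)
    )

predictable = perplexity([0.9, 0.8, 0.9])  # model found every word likely
surprising = perplexity([0.1, 0.3, 0.05])  # human quirks surprise the model
```

Bot-generated text tends to look like the `predictable` case in every sentence, so both its perplexity and its burstiness stay low.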

The app is basically aimed at teachers who might want to know if their students are writing assignments with the help of AI bots.

Moreover, the app has been tested against human- and GPT-generated text and has produced quite accurate results.


2. GLTR

Next on the list is GLTR (Giant Language Model Test Room). This is an online tool based on an algorithm that can detect possible plagiarism in AI-generated text written by a bot like ChatGPT.

The algorithm analyzes the text based on a huge database of predicted words and highlights the words in the text that are among the top 10 (green), top 100 (yellow), or top 1000 (red) predicted words.

So, the more predicted words there are in a text, the more likely it is that the text is bot-generated rather than human-generated.

GLTR is a joint effort of the MIT-IBM Watson AI Lab and the Harvard Natural Language Processing Group. However, the algorithm works better for detecting GPT-2-generated text than the more recent ChatGPT-assisted text.

3. GPT-2 Output Detector

If you want to detect AI-assisted plagiarism in a text that you suspect was generated by ChatGPT’s predecessor GPT-2, then this tool can be really helpful.

GPT-2 Output Detector can tell whether a text is human-generated or bot-generated (specifically by GPT-2) with a good accuracy rate. However, the tool notes that results become reliable only after around 50 tokens.

4. AI Content Detector

Unlike the other tools in this list, this AI Content Detector is actually part of an AI-powered writing platform that also helps you detect AI-generated content.

You can either paste in the content that you want to check or add the link to a page whose content you think might have been generated by an AI bot. The tool takes some time to analyze the text and then gives you a detection score: the percentage of the text that appears human-generated.

There’s, however, a limit of 1500 words that you can run through the AI Content Detector at one time.
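If you need to check a longer article, one workaround is to split it at that boundary and run the detector on each piece. Below is a minimal sketch using a plain word-count split; only the 1500-word limit comes from the article, the helper itself is ours.

```python
# Sketch: split a long text into pieces that fit the detector's
# 1500-word limit, so each piece can be checked separately.

def chunk_words(text, limit=1500):
    """Split text into chunks of at most `limit` words each."""
    words = text.split()
    return [
        " ".join(words[i:i + limit])
        for i in range(0, len(words), limit)
    ]

chunks = chunk_words("word " * 3200)  # a 3200-word article
# -> three chunks of 1500, 1500, and 200 words
```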

5. Crossplag’s AI Content Detector

Crossplag is an intuitive AI content detector that anyone in education, publishing, or content creation can use. You can paste any kind of text into it, and the tool will tell you whether it is human-written, bot-generated, or mixed.

The tool can detect content in hundreds of languages and draws on a corpus of billions of web articles for comparison. Moreover, Crossplag upholds a strict privacy policy and ensures no data leaks.

To use Crossplag, you first need to sign up for a free account, and then you can paste in the content to be checked. For free accounts, the word limit is 1000 per entry. For more features, you can subscribe to a paid package for individuals or educational institutions.

The post 5 Plagiarism Detection Tools to Tell if Content is Written by ChatGPT appeared first on Hongkiat.

How to download After Effects for free or as part of Adobe Creative Cloud

Original Source:

All you need to know about how to download After Effects.

OP-1 Field brought to life: A stunning 3D design showcase by Hugo Barbosa in Blender

Original Source:

Hugo Barbosa’s 3D work created in Blender, showcasing the OP-1 Field synthesizer, sampler, and sequencer, is a stunning display of modeling, materials, motion, and rendering. The Stockholm-based company Teenage Engineering designed and manufactured the OP-1 Field, and Barbosa’s work brings it to life in a way that is both realistic and imaginative.

One of the most impressive aspects of Barbosa’s work is his attention to detail in the modeling of the OP-1 Field. Every button, knob, and dial on the device is faithfully recreated in 3D, down to the smallest details. This level of precision and accuracy is essential for creating a believable representation of the OP-1 Field, and Barbosa has nailed it.

In addition to the modeling, Barbosa’s use of materials is equally impressive. The OP-1 Field’s casing is made of plastic, and Barbosa has captured the texture and finish of the material perfectly. The buttons and dials have a metallic sheen that reflects light in a realistic way, and the overall effect is one of a high-quality, well-made piece of equipment.

The motion of the OP-1 Field in Barbosa’s work is smooth and fluid. The animation of the device’s screen is particularly impressive, with Barbosa capturing the fluidity of the graphical interface in a way that is both mesmerizing and engaging.

Finally, the render of the OP-1 Field in Barbosa’s work is simply stunning. The lighting and shadows are expertly handled, with the device casting realistic shadows on the surface it sits on. The reflections on the OP-1 Field’s plastic casing are perfectly captured, with the surrounding environment reflected in a way that feels both realistic and otherworldly.

In conclusion, Hugo Barbosa’s 3D work showcasing the OP-1 Field synthesizer, sampler, and sequencer is a masterful display of modeling, materials, motion, and render. The attention to detail and the skillful execution of each element make for a truly immersive and impressive experience. Fans of the OP-1 Field, as well as those with an appreciation for 3D design, should not miss this incredible work.

For more information make sure to check out Hugo on: