Let's calm down about The Simpsons season finale
Original Source: https://www.creativebloq.com/entertainment/movies-tv-shows/lets-calm-down-about-the-simpsons-season-finale
The latest major character death may be different from the others.
Original Source: https://smashingmagazine.com/2025/06/can-good-ux-protect-older-users-digital-scams/
A few years ago, my mum, who is in her 80s and not tech-savvy, almost got scammed. She received an email from what appeared to be her bank. It looked convincing, with a professional logo, clean formatting, and no obvious typos. The message said there was a suspicious charge on her account and presented a link asking her to “verify immediately.”
She wasn’t sure what to do. So she called me.
That hesitation saved her. The email was fake, and if she’d clicked on the link, she would’ve landed on a counterfeit login page, handing over her password details without knowing it.
That incident shook me. I design digital experiences for a living. And yet, someone I love almost got caught simply because a bad actor knew how to design well. That raised a question I haven’t stopped thinking about since: Can good UX protect people from online scams?
Quite apart from this incident, I see my Mum struggle with most apps on her phone. For example, navigating around her WhatsApp and YouTube apps seems to be very awkward for her. She is not used to accessing the standard app navigation at the bottom of the screen. What’s “intuitive” for many users is simply not understood by older, non-tech users.
Brief Overview Of How Scams Are Evolving Online
Online scams are becoming increasingly sophisticated, leveraging advanced technologies like artificial intelligence and deepfake videos to create more convincing yet fraudulent content. Scammers are also exploiting new digital platforms, including social media and messaging apps, to reach victims more directly and personally.
Phishing schemes have become more targeted, often using personal information taken from social media to craft customised attacks. Additionally, scammers are using crypto schemes and fake investment opportunities to lure those seeking quick financial gains, making online scams more convincing, diverse, and harder to detect.
The Rise In Fraud Targeting Older, Less Tech-savvy Users
In 2021, there were more than 90,000 older victims of fraud, according to the FBI. These cases resulted in US$1.7 billion in losses, a 74% increase compared with 2020. Even so, that may be a significant undercount since embarrassment or lack of awareness keeps some victims from reporting.
In Australia, the ACCC’s 2023 “Targeting Scams” report revealed that Australians aged 65 and over were the only age group to experience an increase in scam losses compared to the previous year. Their losses rose by 13.3% to $120 million, often following contact with scammers on social media platforms.
In the UK, nearly three in five (61%) people aged over 65 have been the target of fraud or a scam. On average, older people who have been scammed have lost nearly £4,000 each.
According to global consumer protection agencies, people over 60 are more likely to lose money to online scams than any other group. That’s a glaring sign: we need to rethink how we’re designing experiences for them.
Older users are disproportionately targeted by scammers for several reasons:
They’re perceived as having more savings or assets.
They’re less likely to be digital natives, so they may not spot the red flags others do.
They tend to trust authority figures and brands, especially when messages appear “official.”
Scammers exploit trust. They impersonate banks, government agencies, health providers, and even family members. The one that scares me the most is the ability to use AI to mimic a loved one’s voice — anyone can be tricked by this.
Cognitive Load And Decision Fatigue In Older Users
Imagine navigating a confusing mobile app after a long day. Now imagine you’re in your 70s or 80s; your eyesight isn’t as sharp, your finger tapping isn’t as accurate, and every new screen feels like a puzzle.
As people age, they may experience slower processing speeds, reduced working memory, and lower tolerance for complexity. That means:
Multistep processes are harder to follow.
Unexpected changes in layout or behaviour can cause anxiety.
Vague language increases confusion.
Decision fatigue hits harder, too. If a user has already made five choices on an app, they may click the 6th button without fully understanding what it does, especially if it seems to be part of the flow.
Scammers rely on these factors. Good UX, however, can help reduce their impact.
The Digital Literacy Gap And Common Pain Points
There’s a big difference between someone who grew up with the internet and someone who started using it in their 60s. Older users often struggle with:
Recognising safe vs. suspicious links;
Differentiating between ads and actual content;
Knowing how to verify sources;
Understanding terms like “multi-factor authentication” or “phishing”.
They may also be more likely to blame themselves when something goes wrong, leading to underreporting and repeat victimization.
Design can help to bridge some of that gap. But only if we build with their experience in mind.
The Role UX Designers Can Play In Preventing Harm
As UX designers, we focus on making things easy, intuitive, and accessible. But we can also shape how people understand risk.
Every choice, from wording to layout to colour, can affect how users interpret safety cues. When we design for the right cues, we help users avoid mistakes. When we get them wrong or ignore them altogether, we leave people vulnerable.
The good news? We have tools. We have influence. And in a world where digital scams are rising, we can use both to design for protection, not just productivity.
UX As The First Line Of Defence
The list below describes some UX design improvements that we can consider as designers:
1. Clear, Simple Design As A Defence Mechanism
Simple interfaces reduce user errors and scam risks.
Use linear flows, fewer input fields, and clear, consistent instructions.
Helps users feel confident and spot unusual activity.
2. Make Security Cues Obvious And Consistent
Users rely on visible indicators: padlocks, HTTPS, and verification badges.
Provide clear warnings for risky actions and unambiguous button labels.
3. Prioritize Clarity In Language
Use plain, direct language for critical actions (e.g., “Confirm $400 transfer”).
Avoid vague CTAs like “Continue” or playful labels like “Let’s go!”
Clear language reduces uncertainty, especially for older users.
4. Focus On Accessibility And Readability
Use minimum 16px fonts and high-contrast colour schemes.
Provide clear spacing and headings to improve scanning.
Accessibility benefits everyone, not just older users.
5. Use Friction To Protect, Not Hinder
Intentional friction (e.g., verification steps or warnings) can prevent mistakes.
Thoughtfully applied, it enhances safety without frustrating users.
6. Embed Contextual Education
Include just-in-time tips, tooltips, and passive alerts.
Help users understand risks within the flow, not after the fact.
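To make points 3 and 5 above a little more concrete, here is a minimal markup sketch of a transfer confirmation that combines plain language with one intentional moment of friction. The endpoint and exact wording are illustrative (the warning copy echoes the NAB example discussed later in this article), not taken from any real product:

<form action="/transfers/confirm" method="post">
  <p>You are about to send $400 to a payee you have never paid before.</p>
  <p><strong>Have you spoken to this person on the phone?</strong> Scammers often pose as trusted contacts.</p>
  <button type="submit">Confirm $400 transfer</button>
  <button type="button">Cancel and review details</button>
</form>

Notice that the primary button states the exact action and amount rather than a vague “Continue”, and the warning appears in the flow, before the money moves, rather than after.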
What Can’t UX Fix?
Let’s be realistic: UX isn’t magic. We can’t stop phishing emails from landing in someone’s inbox. We can’t rewrite bad policies, and we can’t always prevent users from clicking on a well-disguised trap.
I personally think that even good UX may be limited in helping people like my mother, who will never be tech-savvy. To help those like her, we ultimately need additional supports: contact phone numbers, face-to-face courses on staying safe on your phone, and, of course, help from family members as required. These are all human contact touch points, which can never be replaced by any kind of digital or AI support.
What we can do as designers is build systems that make hesitation feel natural. We can provide visual clarity, reduce ambiguity, and inject small moments of friction that nudge users to double-check before proceeding, especially in financial and banking apps and websites.
That hesitation might be the safeguard we need.
Other Key Tips To Help Seniors Avoid Online Scams
1. Be Skeptical Of Unsolicited Communications
Scammers often pose as trusted entities like banks, government agencies, or tech support to trick individuals into revealing personal information. Avoid clicking on links or downloading attachments from unknown sources, and never share personal details like your Medicare number, passwords, or banking information unless you’ve verified the request independently.
2. Use Strong, Unique Passwords And Enable Two-Factor Authentication
Create complex passwords that combine letters, numbers, and symbols, and avoid reusing passwords across different accounts. Whenever possible, enable two-factor authentication (2FA) to add an extra layer of security to your online accounts.
3. Stay Informed About Common Scams
Educate yourself on prevalent scams targeting seniors, such as phishing emails, romance scams, tech support fraud, and investment schemes. Regularly consult trusted resources like the NCOA and Age UK for updates on new scam tactics and prevention strategies.
4. Verify Before You Act
If you receive a request for money or personal information, especially if it’s urgent, take a moment to verify its legitimacy. Contact the organization directly using official contact information, not the details provided in the suspicious message. Be particularly cautious with unexpected requests from supposed family members or friends.
5. Report Suspected Scams Promptly
If you believe you’ve encountered a scam, report it to the appropriate authorities. Reporting helps protect others and contributes to broader efforts to combat fraud.
For more comprehensive information and resources, consider exploring the following:
National Council on Aging: 22 Tips for Seniors to Avoid Scams
Age UK: Avoiding Scams Information Guide
eSafety Commissioner: Online Scams for Seniors
Examples Of Good Alert/Warning UX In Banking Platforms
I recall my mother not recognising a transaction in her banking app, and she thought that money was being taken from her account. It turns out that it was a legitimate transaction made in a local cafe, but the head office was located in a suburb she was not familiar with, which caused her to think it was fraudulent.
This kind of scenario could easily be addressed with a feature I have seen in the ING (International Netherlands Group) banking app: you tap on a transaction to view more information about it.
ING Banking App: select a transaction to view more details about the business behind it. (Source: ING Help Hub)
Banking apps like NAB (National Australia Bank) now interrupt suspicious transfers with messages like, “Have you spoken to this person on the phone? Scammers often pose as trusted contacts.” NAB said that December was the biggest month in 2024 for abandoned payments, with customers scrapping $26 million worth of payments after receiving a payment alert.
Macquarie Bank has introduced additional prompts for bank transactions to confirm the user’s approval of all transactions.
Monzo Bank has added three security elements to reduce online fraud for banking transactions:
Verified Locations: Large amounts of money can only be sent or moved from locations the account holder has marked as safe. This helps block fraudsters from accessing funds if they’re not near these trusted places.
Trusted Approvers: For large transactions, a trusted contact must give the green light. This adds protection if the account holder’s phone is stolen, or if they want to safeguard someone who may be more vulnerable.
Secure QR Codes: Account holders can generate a special QR code and keep it stored in a safe place. They scan it when needed to unlock extra layers of security.
Email platforms like Gmail highlight spoofed addresses or impersonation attempts with yellow banners and caution icons.
These interventions are not aimed at stopping users, but they can give them one last chance to rethink their transactions. That’s powerful.
Clear UX cues like these streamline the experience and guide users through their journey with greater confidence and clarity.
Conclusion
Added security features in banking apps, like the examples above, aren’t just about preventing fraud; they’re examples of thoughtful UX design. These features are built to feel natural, not burdensome, helping users stay safe without getting overwhelmed. As UX professionals, we have a responsibility to design with protection in mind, anticipating threats and creating experiences that guide users away from risky actions. Good UX in financial products isn’t just seamless; it’s about security by design.
And in a world where digital deception is on the rise, protection is usability. Designers have the power and the responsibility to make interfaces that support safer choices, especially for older users, whose lives and life savings may depend on a single click.
Let’s stop thinking of security as a backend concern or someone else’s job. Let’s design systems that are scam-resistant, age-inclusive, and intentionally clear. And don’t forget to reach out with the additional human touch to help your older family members.
When it comes down to it, good UX isn’t just helpful — it can be life-changing.
Original Source: https://www.hongkiat.com/blog/hide-secret-files-in-images-using-steghide/
Ever wanted to hide sensitive information in plain sight? That’s exactly what steganography allows you to do. Unlike encryption, which makes data unreadable but obvious that something is hidden, steganography conceals the very existence of the secret data.
Steghide is a powerful Linux tool that lets you embed any file into an image with minimal visual changes to the original picture. This makes it perfect for securely transferring sensitive information or simply keeping private files hidden from casual observers, similar to how you might password protect folders on Mac for added security.
While there are many legitimate uses for this technology, like watermarking, protecting intellectual property, or secure communication, it’s important to use these techniques responsibly and legally.
Prerequisites
Before we begin hiding files in images, you’ll need:
A Linux system with Steghide installed (install it using sudo apt-get install steghide on Debian/Ubuntu). For Mac users, follow the installation instructions at steghide-osx. Windows users can download the Windows binary from the Steghide website.
A cover image – preferably a high-quality JPEG file with some visual complexity
A file to hide – this can be any type of file, though smaller files work better
For this tutorial, I’ll be using:
A random image from Unsplash as my cover image
A CSV file I randomly created as the file to hide
This setup mimics a realistic scenario where someone might want to securely transfer sensitive information without raising suspicion.
Step-by-Step Guide: Hiding Files
Follow these steps to hide your files in images:
1. Prepare Your Cover Image
First, you’ll need a suitable image to hide your data in. For this tutorial, I’m using a high-quality random image from Unsplash. The best cover images have:
High resolution and quality
Complex patterns or textures
JPEG format (though Steghide supports other formats too)
2. Prepare the File to Hide
Next, you need the file you want to hide. In this example, I’m using a CSV file I randomly created containing sample data.
When opened, this CSV file shows rows of data that would be valuable to protect. In a real-world scenario, this could be any sensitive information you need to transfer securely.
3. Use Steghide to Embed the File
Now for the actual hiding process. Open your terminal and use the following Steghide command:
steghide embed -cf example.jpg -ef sample_data.csv
Let’s break down this command:
steghide embed – Tells Steghide we want to hide a file
-cf example.jpg – Specifies our cover file (the image)
-ef sample_data.csv – Specifies the file we want to embed
4. Set a Secure Passphrase
After running the command, Steghide will prompt you to enter a passphrase. This password will be required later to extract the hidden file, so make sure it’s something secure that you’ll remember.
For demonstration purposes, I used “password” as my passphrase, but in real-world scenarios, you should use a strong, unique password.
Once you’ve entered and confirmed your passphrase, Steghide will process the files and create a new image with your data hidden inside. The output file will be named according to your cover file (in this case, it created “example.jpg” with the hidden data).
Verifying the Steganography
After hiding your files, you’ll want to verify the process:
1. Visual Comparison
When comparing the original image with the modified one containing our hidden data, there should be no visible differences to the naked eye. If you quickly switch between the two images, they should appear identical.
This is the beauty of steganography – the changes made to accommodate the hidden data are so subtle that they’re practically invisible without specialized analysis tools.
2. File Size Considerations
One thing to note is that the modified image will typically have a slightly larger file size than the original. This increase depends on the size of the hidden file, but Steghide is quite efficient at minimizing this difference.
For sensitive operations, be aware that file size differences could potentially tip off very observant individuals that something has been modified.
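If you want to check a cover image’s embedding capacity, or confirm whether an image already contains hidden data, Steghide’s info command can help. It reports the file format and approximate capacity, and offers to inspect any embedded data once you supply the passphrase:

steghide info example.jpg

The reported capacity gives you a rough upper bound on how large a file the image can hold before embedding becomes impractical.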
Extracting Hidden Files
To retrieve your hidden files, follow these steps:
1. Using the Steghide Extract Command
To extract the hidden file, use the following command in your terminal:
steghide extract -sf example.jpg
Breaking down this command:
steghide extract – Tells Steghide we want to extract a hidden file
-sf example.jpg – Specifies the steganographic file (the image containing hidden data)
2. Entering the Passphrase
After running the command, Steghide will prompt you for the passphrase you set earlier. Enter it correctly, and Steghide will extract the hidden file to your current directory.
3. Verifying the Extracted File
Once extraction is complete, you should find your original file (in our case, the CSV file) in your directory. Open it to verify that all the data is intact and matches the original file you embedded.
In our example, we can confirm that all the fake social security numbers and credit card numbers from our original CSV file have been perfectly preserved in the extracted file.
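If you kept a copy of the original file somewhere else (the extracted file lands in your current directory under its original name), a quick checksum comparison confirms the round trip was lossless. The directory path below is just an example:

# identical hashes mean the embedded data survived byte-for-byte
sha256sum ~/originals/sample_data.csv sample_data.csv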
Security Considerations
Keep these security aspects in mind:
Importance of Strong Passphrases
The security of your hidden data relies heavily on your passphrase. A weak passphrase like “password” (used in our demo) could be easily guessed, compromising your hidden data.
For real security needs, use a strong, unique passphrase that includes a mix of uppercase and lowercase letters, numbers, and special characters.
Limitations and Best Practices
While Steghide is a powerful tool, it’s important to understand its limitations:
File size ratio – The file you’re hiding should be significantly smaller than the cover image
Format limitations – Steghide works best with JPEG, BMP, WAV, and AU files
Steganalysis tools – Advanced forensic tools can sometimes detect steganography
For maximum security:
Consider encrypting your sensitive file before hiding it
Use high-quality, complex images as your cover files
Avoid reusing the same cover image multiple times
Be mindful of metadata in your files
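Expanding on the first of those points, a minimal encrypt-then-hide workflow might look like the following. The file names mirror the earlier examples, and the use of gpg with AES-256 is my suggestion rather than part of Steghide itself (gpg and Steghide each prompt for their own passphrase):

# 1. Encrypt the file symmetrically first (produces sample_data.csv.gpg)
gpg --symmetric --cipher-algo AES256 sample_data.csv

# 2. Embed the encrypted file instead of the plain CSV
steghide embed -cf example.jpg -ef sample_data.csv.gpg

# Later: extract, then decrypt
steghide extract -sf example.jpg
gpg --output sample_data.csv --decrypt sample_data.csv.gpg

That way, even if someone detects and extracts the hidden payload, they still face a second layer of encryption.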
Conclusion
Steghide offers a fascinating and practical way to hide sensitive information within ordinary-looking image files. By following the steps in this tutorial, you can securely embed any file into an image and later extract it with the correct passphrase.
This technique provides an additional layer of security beyond encryption alone, as it conceals the very existence of the secret data. While someone might demand you decrypt an encrypted file, they won’t even know to ask about data hidden through steganography.
Remember to use this technology responsibly and legally. Steganography has legitimate uses in privacy protection, secure communication, and digital watermarking, but like any powerful tool, it should be used ethically.
The post Hiding Secret Files in Images Using Steghide appeared first on Hongkiat.
Original Source: https://tympanus.net/codrops/2025/06/26/designer-spotlight-bimo-tri/
Meet multidisciplinary designer and creative developer Bimo Tri, who crafts expressive digital experiences that merge culture, storytelling, and motion-driven design.
Original Source: https://webdesignerdepot.com/dear-loading-spinner-we-need-to-talk/
Tired of watching the loading spinner do its little dance of despair? This brutally honest breakup letter to everyone’s least favorite UI element calls out the lies, the stalling, and the silent screams behind every infinite loop. If you’ve ever stared at a spinning circle and questioned your life choices, this one’s for you.
Original Source: https://smashingmagazine.com/2025/06/decoding-svg-path-element-curve-arc-commands/
In the first part of decoding the SVG path pair, we mostly dealt with converting things from semantic tags (line, polyline, polygon) into the path command syntax, but the path element didn’t really offer us any new shape options. This will change in this article as we’re learning how to draw curves and arcs, which just refer to parts of an ellipse.
TL;DR On Previous Articles
If this is your first meeting with this series, I recommend you familiarize yourself with the basics of hand-coding SVG, as well as how the <marker> works and have a basic understanding of animate, as this guide doesn’t explain them. I also recommend knowing about the M/m command within the <path> d attribute (I wrote the aforementioned article on path line commands to help).
Note: This article will solely focus on the syntax of curve and arc commands and not offer an introduction to path as an element.
Before we get started, I want to do a quick recap of how I code SVG, which is by using JavaScript. I don’t like dealing with numbers and math, and reading SVG code that has numbers filled into every attribute makes me lose all understanding of it. By giving coordinates names and having all my math easy to parse and all written out, I have a much better time with this type of code, and I think you will, too.
As the goal of this article is about understanding path syntax and not about doing placement or how to leverage loops and other more basic things, I will not run you through the entire setup of each example. I’ll share some snippets of the code, but please note that it may be slightly adjusted from the CodePen or simplified to make the article easier to read. However, if there are specific questions about code not part of the text that’s in the CodePen demos — the comment section is open, as always.
To keep this all framework-agnostic, the code is written in vanilla JavaScript, though, in practice, TypeScript comes highly recommended when dealing with complex images.
Drawing Bézier Curves
Being able to draw lines, polygons, polylines, and compounded versions of them is all fun and nice, but path can also do more than just offer more cryptic implementations of basic semantic SVG tags.
One of those additional types is Bézier curves.
There are multiple different curve commands. And this is where the idea of points and control points comes in.
Bézier math plotting is out of scope for this article.
But, there is a visually gorgeous video by Freya Holmér called The Beauty of Bézier Curves which gets into the construction of cubic and quadratic bézier curves that features beautiful animation and the math becomes a lot easier to digest.
Luckily, SVG allows us to draw quadratic curves with one control point and cubic curves with two control points without having to do any additional math.
So, what is a control point? A control point is the position of the handle that controls the curve. It is not a point that is drawn.
I found the best way to understand these path commands is to render them like a GUI, like Affinity and Illustrator would. Then, draw the “handles” and draw a few random curves with different properties, and see how they affect the curve. Seeing that animation also really helps to see the mechanics of these commands.
This is what I’ll be using markers and animation for in the following visuals. You will notice that the markers I use are rectangles and circles, and since they are connected to lines, I can make use of marker and then save myself a lot of animation time because these additional elements are rigged to the system. (And animating a single d command instead of x and y attributes separately makes the SVG code also much shorter.)
Quadratic Bézier Curves: Q & T Commands
The Q command is used to draw quadratic béziers. It takes two arguments: the control point and the end point.
So, for a simple curve, we would start with M to move to the start point, then Q to draw the curve.
const path = `M${start.x} ${start.y} Q${control.x} ${control.y} ${end.x} ${end.y}`;
Since we have the Control Point, the Start Point, and the End Point, it’s actually quite simple to render the singular handle path like a graphics program would.
Funny enough, you probably have never interacted with a quadratic Bézier curve like with a cubic one in most common GUIs! Most of the common programs will convert this curve to a cubic curve with two handles and control points as soon as you want to play with it.
For the drawing, I created a couple of markers, and I’m drawing the handle in red to make it stand out a bit better.
I also stroked the main path with a gradient and gave it a crosshatch pattern fill. (We looked at pattern in my first article; linearGradient is fairly similar. They’re both defs elements you can refer to via id.) I like seeing the fill, but if you find it distracting, you can modify the variable for it.
I encourage you to look at the example with and without the rendering of the handle to see some of the nuance that happens around the points as the control points get closer to them.
See the Pen SVG Path Quadratic Bézier Curve Visual [forked] by Myriam.
Quadratic Béziers are the “less-bendy” ones.
These curves always remain somewhat related to “u” or “n” shapes and can’t be manipulated to be contorted. They can be squished, though.
Connected Bézier curves are called “Splines”. And there is an additional command when chaining multiple quadratic curves, which is the T command.
The T command is used to draw a curve that is connected to the previous curve, so it always has to follow a Q command (or another T command). It only takes one argument, which is the endpoint of the curve.
const path = `M${p1.x} ${p1.y} Q${cP.x} ${cP.y} ${p2.x} ${p2.y} T${p3.x} ${p3.y}`;
The T command will actually use information about our control point cP within the Q command.
You can see this in the following example. Notice that the inferred handles are drawn in green, while our specified controls are still rendered in red.
See the Pen SVG Path Quadratic Curve T Command [forked] by Myriam.
OK, so the top curve takes two Q commands, which means, in total, there are three control points. Using a separate control point to create the scallop makes sense, but the third control point is just a reflection of the second control point through the preceding point.
This is what the T command does. It infers control points by reflecting them through the end point of the preceding Q (or T) command. You can see how the system all links up in the animation below, where all I’ve manipulated is the position of the main points and the first control points. The inferred control points follow along.
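If you want to compute that inferred control point yourself, say, to draw the green handles like in the demo, the reflection is just a point mirrored through another point. A small helper in the same vanilla JavaScript style (the function and variable names here are mine) could look like this:

const reflect = (controlPoint, throughPoint) => ({
  x: 2 * throughPoint.x - controlPoint.x,
  y: 2 * throughPoint.y - controlPoint.y,
});

// The control point that T infers after "Q cP p2" is cP mirrored through p2:
const inferredCP = reflect(cP, p2);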
See the Pen SVG Path Quadratic Bézier Spline T Command Visual [forked] by Myriam.
The q and t commands also exist, so they will use relative coordinates.
Before I go on, if you do want to interact with a cubic curve, SVG Path Editor allows you to edit all path commands very nicely.
Cubic Bézier Curves: C And S
Cubic Bézier curves work basically like quadratic ones, but instead of having one control point, they have two. This is probably the curve you are most familiar with.
The order is that you start with the first control point, then the second, and then the end point.
const path = `M${p1.x} ${p1.y} C${cP1.x} ${cP1.y} ${cP2.x} ${cP2.y} ${p2.x} ${p2.y}`;
Let’s look at a visual to see it in action.
See the Pen SVG Path Cubic Bézier Curve Animation [forked] by Myriam.
Cubic Bézier curves are contortionists.
Unlike the quadratic curve, this one can curl up and form loops and take on completely different shapes than any other SVG element. It can split the filled area into two parts, while the quadratic curve can not.
Just like with the T command, a reflecting command is available for cubic curves S.
When using it, we get the first control point through the reflection, while we can define the new end control point and then the end point. Like before, this requires a spline, so at least one preceding C (or S) command.
const path = `
M ${p0.x} ${p0.y}
C ${c0.x} ${c0.y} ${c1.x} ${c1.y} ${p1.x} ${p1.y}
S ${c2.x} ${c2.y} ${p2.x} ${p2.y}
`;
I created a living visual for that as well.
See the Pen SVG Path Cubic Bézier Spline S Command Visual [forked] by Myriam.
When to use T and S:
The big advantage of using these chaining reflecting commands is if you want to draw waves or just absolutely ensure that your spline connection is smooth.
If you can’t use a reflection but want to have a nice, smooth connection, make sure your control points form a straight line. If you have a kink in the handles, your spline will get one, too.
Arcs: A Command
Finally, the last type of path command is to create arcs. Arcs are sections of circles or ellipses.
It’s my least favorite command because there are so many elements to it. But it is the secret to drawing a proper donut chart, so I have a bit of time spent with it under my belt.
Let’s look at it.
Like with any other path command, lowercase implies relative coordinates. So, just as there is an A command, there’s also an a.
So, an arc path looks like this:
const path = `M${start.x} ${start.y} A${radius.x} ${radius.y} ${xAxisRotation} ${largeArcFlag} ${sweepFlag} ${end.x} ${end.y}`;
And what the heck are xAxisRotation, largeArcFlag, and sweepFlag supposed to be? In short:
xAxisRotation is the rotation of the underlying ellipse’s axes in degrees.
largeArcFlag is a boolean value that determines if the arc is greater than 180°.
sweepFlag is also a boolean and determines the arc direction, so does it go clockwise or counter-clockwise?
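To make those flags concrete, here is a tiny sketch (the coordinates are mine, not from the demo) of two arcs between the same pair of points that differ only in the sweep flag. Because the endpoints sit exactly one diameter apart, both arcs are half circles, so the large arc flag makes no difference in this particular case:

const start = { x: 50, y: 100 };
const end = { x: 150, y: 100 };
const r = 50; // circular ellipse: rx and ry are equal, so xAxisRotation has no visible effect

// sweepFlag = 1: drawn clockwise, which takes the path over the top of the circle
const overTheTop = `M${start.x} ${start.y} A${r} ${r} 0 0 1 ${end.x} ${end.y}`;

// sweepFlag = 0: drawn counterclockwise, which takes the path under the bottom
const underneath = `M${start.x} ${start.y} A${r} ${r} 0 0 0 ${end.x} ${end.y}`;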
To better understand these concepts, I created this visual.
See the Pen SVG Path Arc Command Visuals [forked] by Myriam.
Radius Size
You’ll notice in that CodePen that there are ellipses drawn for each command. In the top row, they are overlapping, while in the bottom row, they are stacked up. Both rows actually use the same radius.x and radius.y values in their arc definitions, while the distance between the start and end points increases for the second row.
The reason why the stacking happens is that the radius size is only taken into consideration if the start and end points fit within the specified ellipse. That behavior surprised me, and thus, I dug into the specs and found the following information on how the arc works:
“Arbitrary numerical values are permitted for all elliptical arc parameters (other than the boolean flags), but user agents must make the following adjustments for invalid values when rendering curves or calculating their geometry:
If the endpoint (x, y) of the segment is identical to the current point (e.g., the endpoint of the previous segment), then this is equivalent to omitting the elliptical arc segment entirely.
If either rx or ry is 0, then this arc is treated as a straight line segment (a “lineto”) joining the endpoints.
If either rx or ry have negative signs, these are dropped; the absolute value is used instead.
If rx, ry and x-axis-rotation are such that there is no solution (basically, the ellipse is not big enough to reach from the current point to the new endpoint) then the ellipse is scaled up uniformly until there is exactly one solution (until the ellipse is just big enough).
See the appendix section Correction of out-of-range radii for the mathematical formula for this scaling operation.”
— 9.5.1 Out-of-range elliptical arc parameters
So, really, that stacking is just nice and graceful error-handling and not how it was intended. Because the top row is how arcs should be used.
When plugging in logical values, the underlying ellipses and the two points give us four drawing options for how we could connect the two points along an elliptical path. That’s what the boolean values are for.
xAxisRotation
Before we get to the booleans, the crosshatch pattern shows the xAxisRotation. The ellipse is rotated around its center, with the degree value being in relation to the x-direction of the SVG.
So, if you work with a circular ellipse, the rotation won’t have any effect on the arc (except if you use it in a pattern like I did there).
Sweep Flag
Notice the little arrow marker that shows the arc drawing direction. If the value is 1, the arc is drawn clockwise (in the positive-angle direction); if the value is 0, it is drawn counterclockwise.
Large Arc Flag
The largeArcFlag tells the path whether you want the smaller or the larger arc of the ellipse. In the scaled case, we get exactly 180° of the ellipse either way.
Arcs usually require a lot more annoying circular number-wrangling than I am happy doing. (As soon as radians come into play, I tend to spiral into rabbit holes where I have to relearn too much math I happily forget.)
They are more reliant on values being related to each other for the outcome to be as expected, and there’s just so much information going in.
But, and that’s a big but, arcs are wonderfully powerful!
Conclusion
Alright, that was a lot! However, I do hope that you are starting to see how path commands can be helpful. I find them extremely useful to illustrate data.
Once you know how easy it is to set up stuff like grids, boxes, and curves, it doesn’t take many more steps to create visualizations that are a bit more unique than what the standard data visualization libraries offer.
With everything you’ve learned in this series of articles, you’re basically fully equipped to render all different types of charts — or other types of visualizations.
Like, how about visualizing the underlying cubic-bezier of something like transition-timing-function: ease; in CSS? That’s the thing I made to figure out how I could turn those transition-timing-functions into something an <animate> tag understands.
See the Pen CSS Cubic Beziers as SVG Animations & CSS Transition Comparisons [forked] by Myriam.
SVG is fun and quirky, and the path element may be the holder of the most overwhelming string of symbols you’ve ever laid eyes on during code inspection. However, if you take the time to understand the underlying logic, it all transforms into one beautifully simple and extremely powerful syntax.
I hope with this pair of path decoding articles, I managed to expose the underlying mechanics of how path plots work. If you want even more resources that don’t require you to dive through specs, try the MDN tutorial about paths. It’s short and compact, and was the main resource for me to learn all of this.
However, since I wrote my deep dive on the topic, I stumbled into the beautiful svg-tutorial.com, which does a wonderful job visualizing SVG coding as a whole but mostly features my favorite arc visual of them all in the Arc Editor. And if you have a path that you’d like properly decoded without having to store all of the information in these two articles, there’s SVG Path Visualizer, which breaks down path information super nicely.
And now: Go forth and have fun playing in the matrix.
Original Source: https://www.sitepoint.com/angular-signals-a-new-mental-model/?utm_source=rss
Discover how Angular Signals revolutionize data flow with reactive variables, not streams. Learn production gotchas, when to choose Signals over RxJS, and real-world patterns.
Continue reading Angular Signals: A New Mental Model for Reactivity, Not Just a New API on SitePoint.
Original Source: https://www.hongkiat.com/blog/essential-cursor-editor-tips/
Cursor is a code editor designed to help you write code faster and more efficiently. It uses AI assistants that understand your code, offers smart suggestions, generates code snippets, and even helps fix bugs.
To make the most of Cursor, it’s important to use it effectively. In this article, we’ll share practical tips and tricks to boost your workflow and get the best results in this AI-powered code editor.
Ready to boost your productivity? Here are some practical ways to get the most out of Cursor.
1. Use the Cursor CLI
The cursor CLI is a command-line tool for Windows, macOS, and Linux that allows you to interact with the Cursor editor directly from your terminal. To install the CLI, launch the command palette with Cmd/Ctrl+Shift+P and select the Shell Command: Install “cursor” command option, as follows:
It works similarly to the code CLI for VSCode. It allows you, for example, to create, manage, and open projects in the Cursor editor without leaving the command line.
In addition to project management, the CLI also helps you handle extensions in Cursor. You can list installed extensions, update them, or uninstall ones you no longer need with simple commands.
Here are a few examples of how you can use the cursor CLI:
Open current directory in Cursor editor:
cursor .
Add folder to the last active window:
cursor --add site
List currently installed extensions:
cursor --list-extensions
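Since the cursor CLI mirrors the code CLI, a couple of extension-related commands are also worth trying. These are based on the VS Code CLI equivalents rather than anything documented above, so treat them as assumptions to verify on your setup:

cursor --install-extension <extension-id>     # install an extension by its marketplace ID
cursor --uninstall-extension <extension-id>   # remove an extension you no longer need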
Using the CLI is particularly useful if you frequently work in, or simply prefer, the terminal, as it can help make your development process more efficient.
2. Use Context
The chat feature in Cursor allows you to interact with the AI assistant directly. You can ask questions, request code changes, and get suggestions that you can apply with a single click.
One important thing to remember is that Cursor works best when you provide the right context. The more relevant details you include, the better its responses will be.
A great way to do this is by tagging relevant files using @. This helps the AI understand your code better and give more precise suggestions.
For example, if you want to create a test for a class, you can tag the file.
This way, it better understands what the code is about and can provide more accurate responses. If you’re happy with the result, simply click the Apply option. It also understands where in the directory the new file should go, as we can see below:
3. Use Image for Context in Chat
Furthermore, one of the cool things about Cursor chat is that you can include an image as context. You can do so by dragging and dropping the image onto the chat box.
When an image is included, it can analyze it alongside the provided text, enabling it to generate more relevant and accurate code. This is particularly useful for tasks that require visual cues, such as updating user interfaces or replicating design elements from mockups.
In this example, we will use it to generate an SVG.
It’s pretty handy!
But it’s important to note that results may vary depending on the image’s complexity and the task. It can still struggle with finer details.
4. Use Custom Rules
Cursor also ships with a feature called Rules for AI.
This feature allows you to define rules for the AI to follow when suggesting or generating code. You can define the format, naming conventions, and best practices for your project, or apply rules to specific files.
This is super helpful if you’re working with a team and need everyone to follow the same coding rules, or if you just have a personal way of doing things. It can save you time, avoid unnecessary edits, and get suggestions that fit your workflow perfectly.
So, to set up the rules, go to Settings > Cursor Settings > Rules. Click on + Add new rule. Then, add a name and description, and optionally attach a file to the rule.
Now, it’s time to set up your rules.
If you’re just getting started, keep it simple. Don’t try to define every rule at once. Focus on the most important ones first. Then, test how the AI responds and refine your rules as needed to get the best results.
Here is an example of how we can describe the rule:
This ensures the AI assistant follows the PSR-12 convention when generating PHP code, with a few exceptions, and also applies a specific rule to one particular file.
5. Use Notepads
Another feature in Cursor that can make your workflow even more efficient is Notepads. By default, this feature might be hidden in the editor, but you can enable it by right-clicking on the Primary Sidebar on the right side and selecting Notepads from the menu.
Now, you can find Notepads in the Cursor sidebar. Create a new one with a clear name, add your content using plain text or markdown.
You can use it, for example, to write down project architecture decisions, record development guidelines and best practices, and help maintain consistency across your codebase.
If you frequently use certain code snippets, Notepads can act as a handy place to store reusable templates. It’s also great for keeping frequently referenced documentation, like API details, troubleshooting steps, or internal workflows.
Here is an example where we define the architecture decisions for the Frontend projects:
Now, you can refer your Notepads in Chat or in Composer (Agent) in Cursor, using @Notepads.
6. Documentation Integration
Cursor, like any AI assistant or tool, works best when it has the right context, such as relevant documentation, to guide its responses.
In Cursor, you can add and reference external documentation directly in the editor to give the AI assistant access to important resources.
By default, Cursor already includes a wide range of official documentation, covering frameworks like WordPress, Laravel, Vue, React, Angular, and many more. If the documentation you need isn’t available, you can easily add it by providing a URL. This is especially useful for including internal team documentation. As long as the content is publicly accessible, Cursor can fetch and use it.
To include documentation as a reference, type @docs in the chat box, and then search for the documentation you need.
In this example, I add the reference to the WordPress official docs and ask Cursor to create a post type.
Cursor is smart enough that it defined the post type as a properly named class, added it in the proper directory, set the private option to false, and added all translatable labels with the correct text domain.
Wrapping Up
Cursor is a powerful AI assistant that can help you write code faster, and improve your workflow. In this article, we’ve explored some of the tips and tricks that can help you get the most out of Cursor. Hopefully, you’ve found them useful and can apply them to your own projects.
The post 6 Cursor AI Tips You Should Know appeared first on Hongkiat.
Original Source: https://smashingmagazine.com/2025/06/css-cascade-layers-bem-utility-classes-specificity-control/
CSS is wild, really wild. And tricky. But let’s talk specifically about specificity.
When writing CSS, it’s close to impossible that you haven’t faced the frustration of styles not applying as expected — that’s specificity. You applied a style, it worked, and later, you try to override it with a different style and… nothing, it just ignores you. Again, specificity.
Sure, there’s the option of resorting to !important flags, but like all developers before us, it’s always risky and discouraged. It’s way better to fully understand specificity than go down that route because otherwise you wind up fighting your own important styles.
Specificity 101
Lots of developers understand the concept of specificity in different ways.
The core idea of specificity is that the CSS Cascade algorithm used by browsers determines which style declaration is applied when two or more rules match the same element.
Think about it. As a project expands, so do the specificity challenges. Let’s say Developer A adds .cart-button, then maybe the button style looks good to be used on the sidebar, but with a little tweak. Then, later, Developer B adds .cart-button .sidebar, and from there, any future changes applied to .cart-button might get overridden by .cart-button .sidebar, and just like that, the specificity war begins.
I’ve written CSS long enough to witness different strategies that developers have used to manage the specificity battles that come with CSS.
/* Traditional approach */
#header .nav li a.active { color: blue; }
/* BEM approach */
.header__nav-item--active { color: blue; }
/* Utility classes approach */
.text-blue { color: blue; }
/* Cascade Layers approach */
@layer components {
.nav-link.active { color: blue; }
}
All these methods reflect different strategies on how to control or at least maintain CSS specificity:
BEM: tries to simplify specificity by being explicit.
Utility-first CSS: tries to bypass specificity by keeping it all atomic.
CSS Cascade Layers: manage specificity by organizing styles in layered groups.
We’re going to put all three side by side and look at how they handle specificity.
My Relationship With Specificity
I actually used to think that I got the whole picture of CSS specificity. Like the usual inline greater than ID greater than class greater than tag. But, reading the MDN docs on how the CSS Cascade truly works was an eye-opener.
There’s a code I worked on in an old codebase provided by a client, which looked something like this:
/* Legacy code */
#main-content .product-grid button.add-to-cart {
background-color: #3a86ff;
color: white;
padding: 10px 15px;
border-radius: 4px;
}
/* 100 lines of other code here */
/* My new CSS */
.btn-primary {
background-color: #4361ee; /* New brand color */
color: white;
padding: 12px 20px;
border-radius: 4px;
box-shadow: 0 2px 5px rgba(0,0,0,0.1);
}
Looking at this code, there’s no way the .btn-primary class stands a chance against whatever specificity chain of selectors was previously written. As far as specificity goes, CSS gives the first selector a score of 1, 2, 1: one point for the ID, two points for the two classes, and one point for the element selector. Meanwhile, the second selector is scored as 0, 1, 0 since it only consists of a single class selector.
Sure, I had some options:
I could use !important on the properties in .btn-primary to override the ones declared in the stronger selector, but the moment that happens, be prepared to use it everywhere. So, I’d rather avoid it.
I could try going more specific, but personally, that’s just being cruel to the next developer (who might even be me).
I could change the styles of the existing code, but that’s adding to the specificity problem:
#main-content .product-grid .btn-primary {
/* edit styles directly */
}
Eventually, I ended up writing the whole CSS from scratch.
When native CSS nesting was introduced, I tried using it to control specificity that way:
.profile-widget {
// … other styles
.header {
// … header styles
.user-avatar {
border: 2px solid blue;
&.is-admin {
border-color: gold; // This becomes .profile-widget .header .user-avatar.is-admin
}
}
}
}
And just like that, I have unintentionally created high-specificity rules. That’s how easily and naturally we can drift toward specificity complexities.
So, to save myself a lot of these issues, I have one principle I always abide by: keep specificity as low as possible. And if the selector complexity is becoming a complex chain, I rethink the whole thing.
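For the nesting example above, that rethink can be as simple as flattening the selectors so each rule stays at one or two classes of specificity. The class names below are my own rewording of the same component:

.profile-widget { /* … other styles */ }
.profile-header { /* … header styles */ }
.user-avatar {
  border: 2px solid blue;
}
.user-avatar.is-admin {
  border-color: gold; /* 0, 2, 0 instead of the nested 0, 4, 0 */
}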
BEM: The OG System
The Block-Element-Modifier (BEM, for short) has been around the block (pun intended) for a long time. It is a methodological system for writing CSS that forces you to make every style hierarchy explicit.
/* Block */
.panel {}
/* Element that depends on the Block */
.panel__header {}
.panel__content {}
.panel__footer {}
/* Modifier that changes the style of the Block */
.panel--highlighted {}
.panel__button--secondary {}
When I first experienced BEM, I thought it was amazing, despite contrary opinions that it looked ugly. I had no problems with the double hyphens or underscores because they made my CSS predictable and simplified.
How BEM Handles Specificity
Take a look at these examples. Without BEM:
/* Specificity: 0, 3, 0 */
.site-header .main-nav .nav-link {
color: #472EFE;
text-decoration: none;
}
/* Specificity: 0, 2, 0 */
.nav-link.special {
color: #FF5733;
}
With BEM:
/* Specificity: 0, 1, 0 */
.main-nav__link {
color: #472EFE;
text-decoration: none;
}
/* Specificity: 0, 1, 0 */
.main-nav__link--special {
color: #FF5733;
}
You see how BEM makes the code look predictable as all selectors are created equal, thus making the code easier to maintain and extend. And if I want to add a button to .main-nav, I just add .main-nav__btn, and if I need a disabled button (modifier), .main-nav__btn--disabled. Specificity is low, as I don’t have to increase it or fight the cascade; I just write a new class.
BEM’s naming principle made sure components lived in isolation, which, for the specificity part of CSS at least, worked: a .card__title class will never accidentally clash with a .menu__title class.
Where BEM Falls Short
I like the idea of BEM, but it is not perfect, and a lot of people noticed it:
The class names can get really long.
<div class="product-carousel__slide--featured product-carousel__slide--on-sale">
  <!-- yikes -->
</div>
Reusability might not be prioritized, which somewhat contradicts the native CSS ideology. Should a button inside a card be .card__button or reuse a global .button class? With the former, styles are being duplicated, and with the latter, the BEM strict model is being broken.
One of the core pains in software development starts becoming a reality — naming things. I’m sure you know the frustration of that already.
BEM is good, but sometimes you may need to be flexible with it. A hybrid system (maybe using BEM for core components but simpler classes elsewhere) can still keep specificity as low as needed.
/* Base button without BEM */
.button {
/* Button styles */
}
/* Component-specific button with BEM */
.card__footer .button {
/* Minor overrides */
}
Utility Classes: Specificity By Avoidance
This is also called Atomic CSS. And in its entirety, it avoids specificity.
<button class="bg-red-300 hover:bg-red-500 text-white py-2 px-4 rounded">
A button
</button>
The idea behind utility-first classes is that every utility class has the same specificity, which is one class selector. Each class is a tiny CSS property with a single purpose.
p-2? Padding, nothing more. text-red? Color red for text. text-center? Text alignment. It’s like how LEGOs work, but for styling. You stack classes on top of each other until you get your desired appearance.
How Utility Classes Handle Specificity
Utility classes do not solve specificity, but rather, they take the BEM ideology of low specificity to the extreme. Almost all utility classes have the same lowest possible specificity level of (0, 1, 0). And because of this, overrides become easy; if more padding is needed, bump .p-2 to .p-4.
Another example:
<button class="bg-orange-300 hover:bg-orange-700">
This can be hovered
</button>
If another class, hover:bg-red-500, is added, the order matters for CSS to determine which to use. So, even though the utility classes avoid specificity, the other parts of the CSS Cascade come in, which is the order of appearance, with the last matching selector declared being the winner.
Utility Class Trade-Offs
The most common issue with utility classes is that they make the code look ugly. And frankly, I agree. But being able to picture what a component looks like without seeing it rendered is just priceless.
There’s also the argument of reusability, that you repeat yourself every single time. But once one finds a repetition happening, just turn that part into a reusable component. It also has its genuine limitations when it comes to specificity:
If your brand color changes, which is a global change, and you’re deep in the codebase, you can’t just change it in one place and have everything else follow, as you could with native CSS.
The parent-child relationship that happens naturally in native CSS is out the window due to how atomic utility classes behave.
Some argue the HTML part should be left as markup and the CSS part for styling. Because now, there’s more markup to scan, and if you decide to clean up:
<!-- Too long -->
<div class="p-4 bg-yellow-100 border border-yellow-300 text-yellow-800 rounded">
<!-- Better? -->
<div class="alert-warning">
Just like that, we’ve ended up writing CSS. Circle of life.
In my experience with utility classes, they work best for:
Speed: writing the markup, styling it, and seeing the result swiftly.
Predictability: a utility class does exactly what it says it does.
Cascade Layers: Specificity By Design
Now, this is where it gets interesting. BEM offers structure, utility classes gain speed, and CSS Cascade Layers give us something paramount: control.
Anyway, Cascade Layers (@layer) group styles and declare what order the groups should take, regardless of the specificity scores of those rules.
Looking at a set of independent rulesets:
button {
background-color: orange; /* Specificity: 0, 0, 1 */
}
.button {
background-color: blue; /* Specificity: 0, 1, 0 */
}
#button {
background-color: red; /* Specificity: 1, 0, 0 */
}
/* No matter what, the button is red */
But with @layer, let’s say, I want to prioritize the .button class selector. I can shape how the specificity order should go:
@layer utilities, defaults, components;
@layer defaults {
button {
background-color: orange; /* Specificity: 0, 0, 1 */
}
}
@layer components {
.button {
background-color: blue; /* Specificity: 0, 1, 0 */
}
}
@layer utilities {
#button {
background-color: red; /* Specificity: 1, 0, 0 */
}
}
Due to how @layer works, .button would win because the components layer is the highest priority, even though #button has higher specificity. Thus, before CSS could even check the usual specificity rules, the layer order would first be respected.
You just have to respect the folks over at W3C, because now one can purposely override an ID selector with a simple class, without even using !important. Fascinating.
Cascade Layers Nuances
Here are some things that are worth calling out when we’re talking about CSS Cascade Layers:
Specificity is still part of the game.
!important acts differently than expected in @layer (it works in reverse!).
@layer isn’t selector-specific but rather style-property-specific.
@layer base {
.button {
background-color: blue;
color: white;
}
}
@layer theme {
.button {
background-color: red;
/* No color property here, so white from base layer still applies */
}
}
@layer can easily be abused. I’m sure there’s a developer out there with 20+ layer declarations that have grown into a monstrosity.
Comparing All Three
Now, for the TL;DR folks out there, here’s a side-by-side comparison of the three: BEM, utility classes, and CSS Cascade Layers.
| Feature | BEM | Utility Classes | Cascade Layers |
| --- | --- | --- | --- |
| Core Idea | Namespace components | Single-purpose classes | Control cascade order |
| Specificity Control | Low and flat | Avoids it entirely | Absolute control due to layer supremacy |
| Code Readability | Clear structure due to naming | Unclear if unfamiliar with the class names | Clear if the layer structure is followed |
| HTML Verbosity | Moderate class names (can get long) | Many small classes that add up quickly | No direct impact, stays only in CSS |
| CSS Organization | By component | By property | By priority order |
| Learning Curve | Requires understanding conventions | Requires knowing the utility names | Easy to pick up, but requires a deep understanding of CSS |
| Tools Dependency | Pure CSS | Often depends on third-party tools, e.g., Tailwind | Native CSS |
| Refactoring Ease | High | Medium | Low |
| Best Use Case | Design systems | Fast builds | Legacy code or third-party code that needs overrides |
| Browser Support | All | All | All (except IE) |
Among the three, each has its sweet spot:
BEM is best when:
There’s a clear design system that needs to be consistent,
There’s a team with different philosophies about CSS (BEM can be the middle ground), and
Styles are less likely to leak between components.
Utility classes work best when:
You need to build fast, like prototypes or MVPs, and
You’re using a component-based JavaScript framework like React.
Cascade Layers are most effective when:
Working on legacy codebases where you need full specificity control,
You need to integrate third-party libraries or styles from different sources (see the sketch after this list), and
Working on large, complex applications or projects with long-term maintenance needs.
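To make the third-party case concrete, here is one way to slot an external stylesheet into a low-priority layer so your own styles always come out on top (the file name and layer names below are placeholders):
@layer vendor, components;

/* Put the third-party stylesheet into the low-priority "vendor" layer */
@import url("third-party-library.css") layer(vendor);

@layer components {
  .button {
    background-color: blue; /* beats anything in the vendor layer, regardless of specificity */
  }
}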
If I had to choose or rank them, I’d go for utility classes with Cascade Layers over using BEM. But that’s just me!
Where They Intersect (How They Can Work Together)
Among the three, Cascade Layers should be seen as the orchestrator, since they can work with the other two strategies. @layer is a native part of the CSS Cascade’s architecture, unlike BEM and utility classes, which are methodologies for managing the Cascade’s behavior.
/* Cascade Layers + BEM */
@layer components {
.card__title {
font-size: 1.5rem;
font-weight: bold;
}
}
/* Cascade Layers + Utility Classes */
@layer utilities {
.text-xl {
font-size: 1.25rem;
}
.font-bold {
font-weight: 700;
}
}
On the other hand, using BEM with utility classes would just end up clashing:
<!-- This feels wrong -->
<div class="card__container p-4 flex items-center">
<p class="card__title text-xl font-bold">Something seems wrong</p>
</div>
I’m putting all my cards on the table: I’m a utility-first developer. And most utility class frameworks use @layer behind the scenes (e.g., Tailwind). So, those two are already together in the bag.
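As a rough illustration of what that looks like under the hood (the exact layer names and generated CSS vary by framework and version), a utility-first setup typically declares its layer order up front so utilities always sit on top:
@layer theme, base, components, utilities;

@layer base {
  /* element defaults and resets live low in the order */
  button {
    background-color: transparent;
  }
}

@layer utilities {
  /* single-purpose classes beat base and component styles by layer order alone */
  .bg-blue {
    background-color: blue;
  }
}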
But, do I dislike BEM? Not at all! I’ve used it a lot and still would, if necessary. I just find naming things to be an exhausting exercise.
That said, we’re all different, and you might have opposing thoughts about what you think feels best. It truly doesn’t matter, and that’s the beauty of this web development space. Multiple routes can lead to the same destination.
Conclusion
So, when it comes to comparing BEM, utility classes, and CSS Cascade Layers, is there a true “winning” approach for controlling specificity in the Cascade?
First of all, CSS Cascade Layers are arguably the most powerful CSS feature that we’ve gotten in years. They shouldn’t be confused with BEM or utility classes, which are strategies rather than part of the CSS feature set.
That’s why I like the idea of combining either BEM with Cascade Layers or utility classes with Cascade Layers. Either way, the idea is to keep specificity low and leverage Cascade Layers to set priorities on those styles.
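To close, here is a compact sketch of what that combination can look like (the layer names are illustrative): low-specificity BEM or utility selectors, with Cascade Layers deciding who wins.
@layer reset, base, components, utilities;

@layer components {
  /* low-specificity BEM component styles */
  .card__title {
    font-size: 1.5rem;
  }
}

@layer utilities {
  /* utilities override components by layer order, not by specificity */
  .text-xl {
    font-size: 1.25rem;
  }
}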
Original Source: https://www.hongkiat.com/blog/best-ai-tools-browser-automation/
AI has transformed how we interact with the web, including how we handle browser tasks. From data extraction and form submissions to workflow automation, AI-powered tools can handle these processes with ease.
Instead of manually clicking through pages or copying information by hand, you can automate those tasks to save time and streamline your workflow.
In this article, we’ve curated and tested some of the best browser automation tools available today. Whether you’re a developer, researcher, or business professional, I’m sure you’ll appreciate how these tools can help you work more efficiently.
Without further ado, let’s check them out.
1. BrowserUse
BrowserUse is an open-source tool designed to enable AI agents to interact with web browsers. This allows the AI agents to perform tasks within the browser environment, such as navigating websites, extracting information, and interacting with web apps.
It supports various models, including OpenAI, Anthropic, Gemini, DeepSeek, and even Ollama.
You can use it for a wide range of tasks: web scraping, making a purchase, applying for a job, sending email, saving files, and a lot more. And since it is built on Playwright, it is compatible with all the browsers that Playwright supports, including Chromium, Firefox, and Safari.
BrowserUse provides a number of examples and use cases in its repository that you can learn from or take inspiration from, such as one that applies for a job on your behalf.
Pros
Supports multiple AI models including Ollama.
Compatible with all browsers supported by Playwright.
Cons
Requires Python and some technical knowledge to set up and use
2. Stagehand
Stagehand is an AI-powered web browsing framework designed to simplify and improve browser automation tasks.
It lets you convert natural language instructions into headless browser operations, which not only reduces the complexity traditionally associated with browser automation but can also speed up your development workflow.
Stagehand also runs Playwright under the hood. What makes it different is its easy-to-follow JavaScript API, which makes it simpler to integrate with your existing JavaScript-based projects.
You can use it to automate a wide range of tasks, from web scraping to testing and monitoring.
Pros
Easy to install via npx
Easy-to-use JavaScript API
Supports a wide range of browser automation tasks
Cons
Only supports OpenAI and Anthropic AI models
3. Skyvern
Skyvern is a tool that uses LLMs and computer vision to automate workflows across various browsers.
It comes with several AI agents designed to handle different tasks:
The 2FA Agent, which is capable of handling two-factor authentication.
The Auto-complete Agent, which is capable of filling out forms with dynamic auto-complete features.
The Data Extraction Agent, which is capable of extracting information from a website, such as text and tables, and organizing it into a proper format.
The Interactable Element Agent, which is capable of parsing the HTML to identify elements like buttons, links, and input fields that can be interacted with.
The Password Agent, which is capable of managing sensitive inputs such as usernames and passwords.
It combines prompts, computer vision, and these intelligent agents to analyze and interact with web pages in real time. This allows it to navigate and automate tasks on websites it has never seen before without needing custom code by mapping visual elements to the actions required for a given workflow.
It supports a wide range of AI models, including OpenAI, Anthropic, and AWS Bedrock, with Ollama and Gemini support coming soon.
Pros
An advanced tool that comes with anti-bot detection mechanisms, a proxy network, and CAPTCHA solving to let you complete more complicated workflows.
Supports various different AI models.
Provides a user-friendly interface to create and manage automation workflows.
Backed by Playwright under the hood, which allows it to work with different browsers, including Chrome, Firefox, and Safari.
Cons
Requires some technical knowledge for a self-hosted setup.
4. Shortest
Shortest is an open-source, AI-powered testing framework that allows you to write end-to-end tests using plain English instructions.
This allows you to focus on describing your test scenarios, while Shortest handles the implementation details. For example, using the shortest function, you can specify actions like logging into an application with a username and password.
import { shortest } from '@antiwork/shortest'
shortest('Login to the app using email and password', {
username: process.env.GITHUB_USERNAME,
password: process.env.GITHUB_PASSWORD
})
It is built on top of Playwright, and provides seamless GitHub integration for continuous integration and deployment workflows.
Pros
Designed specifically for E2E testing
Provides JavaScript API
Seamless GitHub and Playwright integration, which makes it easier to adopt if you’re already using these tools
Cons
It’s designed only for automating E2E testing. If you’re looking to automate other browser tasks, you might want to consider other tools.
5. Automa
Automa is a free, open-source browser extension designed to automate various web tasks such as auto-filling forms, taking screenshots, scraping data from websites, and downloading assets.
Automating browser tasks with it is pretty simple: it provides a user-friendly, low-code interface that lets you create automation workflows by connecting different blocks. It also has a workflow recording feature that captures your actions automatically, and its marketplace features numerous shared workflows that you can add and customize to suit your needs.
Even though it is not an AI-powered tool per se, its ease of use earns it a place on this list, and it also provides a custom block where you can add your own functions to integrate with AI services such as OpenAI, Claude, or DeepSeek.
It is available both for Chrome and Firefox browsers, and you can install it directly from their respective extension stores.
Pros
Comes as a browser extension, so it’s very easy to install.
Provides a user-friendly interface to create automation workflows
Supports custom blocks to integrate with external AI services
Cons
Since it’s not an AI-powered tool per se, it might not be as advanced as the other tools on the list.
Wrapping Up
AI-powered tools can help you automate your browser tasks, saving you time and streamlining your workflow. In this article, we’ve curated some of the best AI-powered tools available today that are free and open-source.
Give them a try and see how they can help you work more efficiently.