Field Notes' latest limited edition: Foiled Again

Original Source: https://abduzeedo.com/field-notes-latest-limited-edition-foiled-again

abduzeedo 09.05.23

The 59th Quarterly Edition from Field Notes, titled “Foiled Again,” showcases a revered printing technique. Over the years, various Field Notes editions have incorporated the art of hot-foil stamping, a method where metallic or pigmented foils are applied to paper through heat and pressure. This process, first patented by Ernst Oeser of Germany in 1892, modernized the traditionally manual application of gold leaf in the publishing sector. The 1950s brought further refinement as plastics enhanced these foils, opening up new applications in printing, packaging, and related fields.

The distinction of “Foiled Again” lies in its collaboration with Studio on Fire, based in St. Paul, Minn. Renowned for their foil-stamping and die-cutting precision, Studio on Fire’s craftsmanship is often deemed unparalleled. Their accolades span across sectors, including cannabis, alcohol, and playing cards, where they’ve masterfully blended multiple foils, inks, embossing, and diecuts. Abduzeedo.com shared insights from a visit to Studio on Fire, praising their blend of traditional and modern technologies and the adept team steering the helm.

The edition features always-awesome artwork by Aaron Draplin highlighting the production process, hot-stamped in silver foil by Studio on Fire on the newly introduced Neenah Pearl “Indigo” 105#C cover stock. Inside, it houses 48 pages of 60# Finch Opaque text, lined with silver ink and bound together with three staples.

An added touch to this edition is a complementary tuck box, reversing the cover art’s colors: using blue foil on Neenah Pearl “Sterling” stock. This box safeguards three Field Notes notebooks, ensuring they remain pristine until used. However, potential buyers should be aware of the limited print run due to the edition’s intricate design.

A special gesture for their subscribers saw Field Notes requesting Studio on Fire to use gold foil on “Poppy” stock, encased in a red-foil-on-“Bright Gold” box. Subscribers are set to receive both the “Indigo”/Silver and exclusive “Poppy”/Gold “Foiled Again” 3-Packs in the unique 2023 subscriber box. Future editions for Fall, Winter, and Spring are eagerly anticipated.

Product images

[Product image gallery available in the original post]

Specifications

01. Proudly foil-stamped by the good people of Studio on Fire, St. Paul, Minn., and printed by the good people of Active Graphics, Inc., Oak Brook, Ill.
02. Cover: Neenah Pearl 105#C “Indigo” (Subscriber edition: “Poppy”) foil stamped on a Saroglia FUB Die Cutter Foil Stamper.
03. Innards: Finch Paper Opaque Smooth 60#T “Bright White,” with a fine, 1-color application of metallic soy-based Toyo ink.
04. Inside covers/innards printed on a Mitsubishi Diamond Series 40″ 6-color press.
05. Bound by a Heidelberg ST350 Stitchmaster 8-pocket saddle stitcher, with appreciation to Samuel Slocum, George W. McGill, and William J. Brown, the “Founding Fathers of the Staple.”
06. Corners precisely rounded to 3/8″ (9.5mm) by a CRC round-corner machine.
07. Ruled lines: 1/4″ (6.4mm).
08. Memo book dimensions are 3-1/2″ × 5-1/2″ (89mm × 140mm).
09. FIELD NOTES uses only the Futura typeface family (Paul Renner, 1927) in its materials.
10. All FIELD NOTES memo books are printed and manufactured in the U.S.A.
11. Limited edition of 26,000 3-Packs.
12. UPC: 850032279246.

About Field Notes

Founded in 2007, Field Notes is a memo book brand headquartered in Chicago, and is a joint venture between Portland, Oregon-based Draplin Design Company and Chicago-based design firm Coudal Partners. Its memo books are proudly printed and manufactured in the United States. Learn more at fieldnotesbrand.com.

Generating Real-Time Audio Sentiment Analysis With AI

Original Source: https://smashingmagazine.com/2023/09/generating-real-time-audio-sentiment-analysis-ai/

In the previous article, we developed a sentiment analysis tool that could detect and score emotions hidden within audio files. We’re taking it to the next level in this article by integrating real-time analysis and multilingual support. Imagine analyzing the sentiment of your audio content in real-time as the audio file is transcribed. In other words, the tool we are building offers immediate insights as an audio file plays.

So, how does it all come together? Meet Whisper and Gradio — the two resources that sit under the hood. Whisper is an advanced automatic speech recognition and language detection library. It swiftly converts audio files to text and identifies the language. Gradio is a UI framework that happens to be designed for interfaces that utilize machine learning, which is ultimately what we are doing in this article. With Gradio, you can create user-friendly interfaces without complex installations, configurations, or any machine learning experience — the perfect tool for a tutorial like this.

By the end of this article, we will have created a fully-functional app that:

Records audio from the user’s microphone,
Transcribes the audio to plain text,
Detects the language,
Analyzes the emotional qualities of the text, and
Assigns a score to the result.

Note: You can peek at the final product in the live demo.

Automatic Speech Recognition And Whisper

Let’s delve into the fascinating world of automatic speech recognition and its ability to analyze audio. In the process, we’ll also introduce Whisper, an automated speech recognition tool developed by the OpenAI team behind ChatGPT and other emerging artificial intelligence technologies. Whisper has redefined the field of speech recognition with its innovative capabilities, and we’ll closely examine its available features.

Automatic Speech Recognition (ASR)

ASR technology is a key component for converting speech to text, making it a valuable tool in today’s digital world. Its applications are vast and diverse, spanning various industries. ASR can efficiently and accurately transcribe audio files into plain text. It also powers voice assistants, enabling seamless interaction between humans and machines through spoken language. It’s used in myriad ways, such as in call centers that automatically route calls and provide callers with self-service options.

By automating audio conversion to text, ASR significantly saves time and boosts productivity across multiple domains. Moreover, it opens up new avenues for data analysis and decision-making.

That said, ASR does have its fair share of challenges. For example, its accuracy is diminished when dealing with different accents, background noises, and speech variations — all of which require innovative solutions to ensure accurate and reliable transcription. The development of ASR systems capable of handling diverse audio sources, adapting to multiple languages, and maintaining exceptional accuracy is crucial for overcoming these obstacles.

Whisper: A Speech Recognition Model

Whisper is a speech recognition model also developed by OpenAI. This powerful model excels at speech recognition and offers language identification and translation across multiple languages. It’s an open-source model available in five different sizes, four of which have an English-only variant that performs exceptionally well for single-language tasks.

What sets Whisper apart is its robust ability to overcome ASR challenges. Whisper achieves near state-of-the-art performance and even supports zero-shot translation from various languages to English. Whisper has been trained on a large corpus of data that characterizes ASR’s challenges. The training data consists of approximately 680,000 hours of multilingual and multitask supervised data collected from the web.

The model is available in multiple sizes. The following table outlines these model characteristics:

| Size | Parameters | English-only model | Multilingual model | Required VRAM | Relative speed |
| --- | --- | --- | --- | --- | --- |
| Tiny | 39 M | tiny.en | tiny | ~1 GB | ~32x |
| Base | 74 M | base.en | base | ~1 GB | ~16x |
| Small | 244 M | small.en | small | ~2 GB | ~6x |
| Medium | 769 M | medium.en | medium | ~5 GB | ~2x |
| Large | 1550 M | N/A | large | ~10 GB | 1x |

For developers working with English-only applications, it’s essential to consider the performance differences among the .en models. Specifically, tiny.en and base.en show the biggest accuracy gains over their multilingual counterparts; the gap narrows for the larger sizes.
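If you want to compare sizes yourself, swapping models is a one-line change. Here is a minimal sketch, assuming the Whisper package is installed (we cover installation later in this article) and using a placeholder audio filename:

import whisper

# Load the English-only "tiny" variant; swap in "base.en", "small", etc.
model = whisper.load_model("tiny.en")

# "speech.wav" is a placeholder for any audio file on disk
result = model.transcribe("speech.wav")
print(result["text"])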

Whisper utilizes a sequence-to-sequence (i.e., transformer encoder-decoder) architecture commonly employed in language-based models. The input consists of audio frames, typically 30-second segments, and the output is a sequence of the corresponding text. Its primary strength lies in transcribing audio into text, making it ideal for “audio-to-text” use cases.

Real-Time Sentiment Analysis

Next, let’s move into the different components of our real-time sentiment analysis app. We’ll explore a powerful pre-trained language model and an intuitive user interface framework.

Hugging Face Pre-Trained Model

I relied on the DistilBERT model in my previous article, but we’re trying something new now. To analyze sentiments precisely, we’ll use a pre-trained model called roberta-base-go_emotions, readily available on the Hugging Face Model Hub.
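As a quick preview of what this model returns, here is a hedged sketch. Note that the top_k=None argument is an assumption that depends on your transformers version; older releases used return_all_scores=True instead:

from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="SamLowe/roberta-base-go_emotions",
    top_k=None  # score all 28 emotion labels instead of just the top one
)

print(classifier("I can't wait to try this out!"))
# e.g. [{'label': 'excitement', 'score': 0.9}, {'label': 'desire', 'score': 0.02}, ...]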

Gradio UI Framework

To make our application more user-friendly and interactive, I’ve chosen Gradio as the framework for building the interface. Last time, we used Streamlit, so it’s a little bit of a different process this time around. You can use any UI framework for this exercise.

I’m using Gradio specifically for its machine learning integrations to keep this tutorial focused more on real-time sentiment analysis than fussing with UI configurations. Gradio is explicitly designed for creating demos just like this, providing everything we need — including the language models, APIs, UI components, styles, deployment capabilities, and hosting — so that experiments can be created and shared quickly.

Initial Setup

It’s time to dive into the code that powers the sentiment analysis. I will break everything down and walk you through the implementation to help you understand how everything works together.

Before we start, we must ensure we have the required libraries installed; they can be installed with pip. If you are using Google Colab, you can install the libraries using the following commands:

!pip install gradio
!pip install transformers
!pip install git+https://github.com/openai/whisper.git

Once the libraries are installed, we can import the necessary modules:

import gradio as gr
import whisper
from transformers import pipeline

This imports Gradio, Whisper, and pipeline from Transformers, which performs sentiment analysis using pre-trained models.

Like we did last time, the project folder can be kept relatively small and straightforward. All of the code we are writing can live in an app.py file. Gradio is based on Python, but the UI framework you ultimately use may have different requirements. Again, I’m using Gradio because it is deeply integrated with machine learning models and APIs, which is ideal for a tutorial like this.

Gradio projects usually include a requirements.txt file that tells the host which dependencies to install. I would include it even if it contains no content.
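For reference, here is what a minimal requirements.txt might look like for this project. This is my assumption based on the libraries we install above; Gradio itself comes preinstalled when a Space is created with the Gradio SDK:

transformers
git+https://github.com/openai/whisper.git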

To set up our application, we load Whisper and initialize the sentiment analysis component in the app.py file:

model = whisper.load_model("base")

sentiment_analysis = pipeline(
    "sentiment-analysis",
    framework="pt",
    model="SamLowe/roberta-base-go_emotions"
)

So far, we’ve set up our application by loading the Whisper model for speech recognition and initializing the sentiment analysis component using a pre-trained model from Hugging Face Transformers.

Defining Functions For Whisper And Sentiment Analysis

Next, we must define four functions related to the Whisper and pre-trained sentiment analysis models.

Function 1: analyze_sentiment(text)

This function takes a text input and performs sentiment analysis using the pre-trained sentiment analysis model. It returns a dictionary containing the sentiments and their corresponding scores.

def analyze_sentiment(text):
    results = sentiment_analysis(text)
    sentiment_results = {
        result['label']: result['score'] for result in results
    }
    return sentiment_results
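For example, calling it on a happy-sounding sentence might return something like the following; the exact label and score are illustrative, not guaranteed:

print(analyze_sentiment("This tutorial is fantastic!"))
# e.g. {'admiration': 0.88}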

Function 2: get_sentiment_emoji(sentiment)

This function takes a sentiment as input and returns a corresponding emoji used to help indicate the sentiment score. For example, a score that results in an “optimistic” sentiment returns a “😊” emoji. So, sentiments are mapped to emojis and return the emoji associated with the sentiment. If no emoji is found, it returns an empty string.

def get_sentiment_emoji(sentiment):
    # Define the mapping of sentiments to emojis
    emoji_mapping = {
        "disappointment": "😞",
        "sadness": "😢",
        "annoyance": "😠",
        "neutral": "😐",
        "disapproval": "👎",
        "realization": "😮",
        "nervousness": "😬",
        "approval": "👍",
        "joy": "😄",
        "anger": "😡",
        "embarrassment": "😳",
        "caring": "🤗",
        "remorse": "😔",
        "disgust": "🤢",
        "grief": "😥",
        "confusion": "😕",
        "relief": "😌",
        "desire": "😍",
        "admiration": "😌",
        "optimism": "😊",
        "fear": "😨",
        "love": "❤️",
        "excitement": "🎉",
        "curiosity": "🤔",
        "amusement": "😄",
        "surprise": "😲",
        "gratitude": "🙏",
        "pride": "🦁"
    }
    return emoji_mapping.get(sentiment, "")

Function 3: display_sentiment_results(sentiment_results, option)

This function displays the sentiment results based on a selected option, allowing users to choose how the sentiment score is formatted. Users have two options: show the score with an emoji or the score with an emoji and the calculated score. The function inputs the sentiment results (sentiment and score) and the selected display option, then formats the sentiment and score based on the chosen option and returns the text for the sentiment findings (sentiment_text).

def display_sentiment_results(sentiment_results, option):
    sentiment_text = ""
    for sentiment, score in sentiment_results.items():
        emoji = get_sentiment_emoji(sentiment)
        if option == "Sentiment Only":
            sentiment_text += f"{sentiment} {emoji}\n"
        elif option == "Sentiment + Score":
            sentiment_text += f"{sentiment} {emoji}: {score}\n"
    return sentiment_text
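To see how the two options differ, here is a hypothetical call with hand-made results:

results = {"joy": 0.92, "optimism": 0.61}

print(display_sentiment_results(results, "Sentiment Only"))
# joy 😄
# optimism 😊

print(display_sentiment_results(results, "Sentiment + Score"))
# joy 😄: 0.92
# optimism 😊: 0.61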

Function 4: inference(audio, sentiment_option)

This function ties everything together, performing language identification, speech recognition, and sentiment analysis in one pass. It takes the audio file and the sentiment display option from the third function as inputs, and returns the language, transcription, and sentiment analysis results that we will display in the front-end UI we build with Gradio in the next section of this article.

def inference(audio, sentiment_option):
    audio = whisper.load_audio(audio)
    audio = whisper.pad_or_trim(audio)

    mel = whisper.log_mel_spectrogram(audio).to(model.device)

    _, probs = model.detect_language(mel)
    lang = max(probs, key=probs.get)

    options = whisper.DecodingOptions(fp16=False)
    result = whisper.decode(model, mel, options)

    sentiment_results = analyze_sentiment(result.text)
    sentiment_output = display_sentiment_results(sentiment_results, sentiment_option)

    return lang.upper(), result.text, sentiment_output

Creating The User Interface

Now that we have the foundation for our project — Whisper, Gradio, and functions for returning a sentiment analysis — in place, all that’s left is to build the layout that takes the inputs and displays the returned results for the user on the front end.

The following steps I will outline are specific to Gradio’s UI framework, so your mileage will undoubtedly vary depending on the framework you decide to use for your project.

Defining The Header Content

We’ll start with the header containing a title, an image, and a block of text describing how sentiment scoring is evaluated.

Let’s define variables for those three pieces:

title = """🎤 Multilingual ASR 💬"""
image_path = "/content/thumbnail.jpg"

description = """
💻 This demo showcases a general-purpose speech recognition model called Whisper. It is trained on a large dataset of diverse audio and supports multilingual speech recognition and language identification tasks.

📝 For more details, check out the [GitHub repository](https://github.com/openai/whisper).

⚙️ Components of the tool:

     - Real-time multilingual speech recognition
     - Language identification
     - Sentiment analysis of the transcriptions

🎯 The sentiment analysis results are provided as a dictionary with different emotions and their corresponding scores.

😃 The sentiment analysis results are displayed with emojis representing the corresponding sentiment.

✅ The higher the score for a specific emotion, the stronger the presence of that emotion in the transcribed text.

❓ Use the microphone for real-time speech recognition.

⚡️ The model will transcribe the audio and perform sentiment analysis on the transcribed text.
"""

Applying Custom CSS

Styling the layout and UI components is outside the scope of this article, but I think it’s important to demonstrate how to apply custom CSS in a Gradio project. It can be done with a custom_css variable that contains the styles:

custom_css = """
#banner-image {
    display: block;
    margin-left: auto;
    margin-right: auto;
}
#chat-message {
    font-size: 14px;
    min-height: 300px;
}
"""

Creating Gradio Blocks

Gradio’s UI framework is based on the concept of blocks. A block is used to define layouts, components, and events combined to create a complete interface with which users can interact. For example, we can create a block specifically for the custom CSS from the previous step:

block = gr.Blocks(css=custom_css)

Let’s apply our header elements from earlier into the block:

block = gr.Blocks(css=custom_css)

with block:
    gr.HTML(title)

    with gr.Row():
        with gr.Column():
            gr.Image(image_path, elem_id="banner-image", show_label=False)
        with gr.Column():
            gr.HTML(description)

That pulls together the app’s title, image, description, and custom CSS.

Creating The Form Component

The app is based on a form element that takes audio from the user’s microphone, then outputs the transcribed text and sentiment analysis formatted based on the user’s selection.

In Gradio, we define a Group() containing a Box() component. A group is merely a container to hold child components without any spacing. In this case, the Group() is the parent container for a Box() child component, a pre-styled container with a border, rounded corners, and spacing.

with gr.Group():
    with gr.Box():

With our Box() component in place, we can use it as a container for the audio file form input, the radio buttons for choosing a format for the analysis, and the button to submit the form:

with gr.Group():
    with gr.Box():
        # Audio Input
        audio = gr.Audio(
            label="Input Audio",
            show_label=False,
            source="microphone",
            type="filepath"
        )

        # Sentiment Option
        sentiment_option = gr.Radio(
            choices=["Sentiment Only", "Sentiment + Score"],
            label="Select an option",
            default="Sentiment Only"
        )

        # Transcribe Button
        btn = gr.Button("Transcribe")

Output Components

Next, we define Textbox() components as output components for the detected language, transcription, and sentiment analysis results.

lang_str = gr.Textbox(label="Language")
text = gr.Textbox(label="Transcription")
sentiment_output = gr.Textbox(label="Sentiment Analysis Results", output=True)

Button Action

Before we move on to the footer, it’s worth specifying the action executed when the form’s Button() component — the “Transcribe” button — is clicked. We want to trigger the fourth function we defined earlier, inference(), using the required inputs and outputs.

btn.click(
    inference,
    inputs=[
        audio,
        sentiment_option
    ],
    outputs=[
        lang_str,
        text,
        sentiment_output
    ]
)

Footer HTML

This is the very bottom of the layout, and I’m giving OpenAI credit with a link to their GitHub repository.

gr.HTML('''
<div class="footer">
    <p>Model by <a href="https://github.com/openai/whisper" style="text-decoration: underline;" target="_blank">OpenAI</a>
    </p>
</div>
''')

Launch the Block

Finally, we launch the Gradio block to render the UI.

block.launch()
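launch() also accepts optional arguments if you need them. These are options I commonly reach for, not requirements of this tutorial:

# block.launch()             # local-only, the default
# block.launch(share=True)   # temporary public link for quick sharing
# block.launch(debug=True)   # keeps the Colab cell running and surfaces errors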

Hosting & Deployment

Now that we have successfully built the app’s UI, it’s time to deploy it. We’ve already used Hugging Face resources, like its Transformers library. In addition to supplying machine learning capabilities, pre-trained models, and datasets, Hugging Face also provides a social hub called Spaces for deploying and hosting Python-based demos and experiments.

You can use your own host, of course. I’m using Spaces because it’s so deeply integrated with our stack that it makes deploying this Gradio app a seamless experience.

In this section, I will walk you through the Spaces deployment process.

Creating A New Space

Before we start with deployment, we must create a new Space.

The setup is pretty straightforward but requires a few pieces of information, including:

A name for the Space (mine is “Real-Time-Multilingual-sentiment-analysis”),
A license type for fair use (e.g., a BSD license),
The SDK (we’re using Gradio),
The hardware used on the server (the “free” option is fine), and
Whether the app is publicly visible to the Spaces community or private.

Once a Space has been created, it can be cloned, or a remote can be added to its current Git repository.

Deploying To A Space

We have an app and a Space to host it. Now we need to deploy our files to the Space.

There are a couple of options here. If you already have the app.py and requirements.txt files on your computer, you can use Git from a terminal to commit and push them to your Space by following these well-documented steps. Or, if you prefer, you can create app.py and requirements.txt directly from the Space in your browser.

Push your code to the Space, and watch the blue “Building” status that indicates the app is being processed for production.

Final Demo

Conclusion

And that’s a wrap! Together, we successfully created and deployed an app capable of converting an audio file into plain text, detecting the language, analyzing the transcribed text for emotion, and assigning a score that indicates that emotion.

We used several tools along the way, including OpenAI’s Whisper for automatic speech recognition, four functions for producing a sentiment analysis, a pre-trained machine learning model called roberta-base-go_emotions that we pulled from the Hugging Face Hub, Gradio as a UI framework, and Hugging Face Spaces to deploy the work.

How will you use these real-time, sentiment-scoping capabilities in your work? I see so much potential in this type of technology that I’m interested to know (and see) what you make and how you use it. Let me know in the comments!

Further Reading On SmashingMag

“The Future Of Design: Human-Powered Or AI-Driven?,” Keima Kai
“Motion Controls In The Browser,” Yaphi Berhanu
“JavaScript APIs You Don’t Know About,” Juan Diego Rodríguez
“The Safest Way To Hide Your API Keys When Using React,” Jessica Joseph

An Introduction to the Laravel PHP Framework

Original Source: https://www.sitepoint.com/laravel-introduction/?utm_source=rss

An Introduction to Laravel

Learn about the Laravel PHP framework, exploring its history, its purpose, and some of its key components and features.

Continue reading An Introduction to the Laravel PHP Framework on SitePoint.

Build a GraphQL Gateway: Combine, Stitch or Merge any Datasource

Original Source: https://www.sitepoint.com/graphql-gateway-combine-stitch-merge/?utm_source=rss

Building a GraphQL Gateway

Learn how to fetch data from multiple sources, while still keeping your frontend snappy, by building your own GraphQL gateway.

Continue reading Build a GraphQL Gateway: Combine, Stitch or Merge any Datasource on SitePoint.

Boom3D for Mac Review: Features, Prices, Pros, and Cons

Original Source: https://www.hongkiat.com/blog/boom3d-mac-review/

If you love music, gaming, or high-quality audio, you’ve probably been disappointed by your device’s speakers at some point. You might be wondering: should you splurge on high-end headphones or a top-notch speaker system?

Before you open your wallet, think about a software solution to elevate your audio experience. We’ve previously discussed several sound booster apps that are making waves.

Today, our spotlight is on Boom3D by Global Delight. This app aims to do more than just pump up the volume. It strives to enhance the overall audio quality, adding depth and clarity. Let’s dive into how this software can upgrade your listening experience on multiple platforms.

What is Boom3D?

Boom3D is a user-friendly app that enhances your audio experience on various desktop platforms, such as Mac and Windows. It offers features like 3D Surround Sound and an equalizer to make your audio more immersive.

Boom3D App Interface

This app smartly adjusts the audio settings for your device, so you don’t need extra equipment. It has preset options for different activities like watching movies or listening to music. You can also manage the volume for each app on your computer individually. In short, Boom3D makes improving your audio simple and effective.

Now, let’s explore the features of Boom3D in more detail.

Visit Boom3D

Key Features of Boom3D
Immersive 3D Sound Experience

Boom3D employs cutting-edge technology to deliver a 3D surround sound experience, making you feel like you’re at the center of the action. This enhances your enjoyment of music, movies, and games. You can also customize the 3D effects and bass to your liking.

3D Sound Visualization
Customizable 31-Band Equalizer

The 31-Band Equalizer allows you to tailor your audio experience to your taste. Whether you love classical or rock music, you can easily adjust the sound settings. The app also provides preset options for different music genres, helping you quickly find the ideal sound for your mood.

31-Band Equalizer Interface
Volume Boost for Mac

If you’re a Mac user, Boom3D offers a volume booster that amplifies your computer’s sound without sacrificing quality. This is especially handy when you want to fully immerse yourself in movies or music.

Ambient and Night Mode Sound Effects

The Ambient feature adds depth to your audio, making your games and movies more engaging. It enhances background noises for a richer experience.

Night Mode is perfect for late-night movie or show watching. It tones down loud sounds while boosting softer ones, so you can enjoy your content without bothering others. You can also adjust the balance to suit your needs.

All-in-One Audio Player

Beyond enhancing your audio, Boom3D also serves as a full-fledged music player. You can play songs stored on your computer, create playlists, and manage your music library.

Advanced Audio Player Interface
Access to 20,000+ Internet Radio Stations

Boom3D gives you free access to an extensive selection of internet radio stations from around the globe. This feature lets you discover new music from various genres and countries.

Internet Radio Station Interface
User Experience
Easy-to-Use Interface

Boom3D is convenient because it can run in the background and be controlled from the menu bar. This clears up space on my dock. All I had to do was go to settings and disable the “Show dock icon” option. Now, I can easily toggle features without opening the full application.

Screenshot of Boom3D's user-friendly interface
High-Quality Audio

The audio quality in Boom3D is impressive, surpassing my Mac’s default settings. Activating the Boom3D engine noticeably enhances the sound. This improvement is consistent across various apps, whether I’m using Apple Music or a local media player.

Flexible Audio Controls

Boom3D offers more than just volume control; it provides a range of audio effects and settings. This is great for me because the default system settings often fall short of my audio quality expectations.

All-Encompassing Audio Features

Boom3D delivers audio enhancements that work system-wide, including 3D Surround Sound, Equalizers, and other effects. I appreciate not having to upload my audio files to the app to enjoy these benefits. To fully utilize the 3D surround sound, a free additional component needs to be installed.

Where to Use Boom3D

Boom3D is a versatile app compatible with various devices. You can use it on Mac and Windows desktops as well as Android and iOS smartphones.

Get Boom3D for:

Mac
Windows

Boom3D Browser Extensions for Netflix

If you’re a Netflix user and browse on Chrome or Safari, Boom3D offers extensions to enhance your experience. These add-ons provide 5.1 surround sound and support 1080p video quality, depending on the content.

Boom3D Netflix Extension Interface

Download: Boom3D 5.1 Surround for Netflix

Boom3D Pricing Details

Boom3D offers a 30-day free trial, allowing you to explore its features before committing. Here’s the pricing breakdown:

For Mac: The app costs $12.51, and you can install it on up to two Macs.
For Windows: The app is likewise priced at $12.51, and you can install it on up to two Windows PCs.

Note: Each platform version is sold separately. Buying the Mac version won’t give you access to the Windows or mobile versions. To use Boom3D on multiple platforms, you’ll need to purchase each version individually.

FAQ

What Sets Boom 2 Apart from Boom3D?

Boom 2 is tailored for macOS users who want high-quality stereo sound. It features a 31-band equalizer and 20 dB gain, perfect for those who desire fine-tuned audio for music, videos, and games. On the other hand, Boom3D delivers 3D surround sound and is compatible with both macOS and Windows.

Is Boom3D a One-Time Buy or a Subscription?

Boom3D is available on both Windows and Mac for a one-time fee of $12.51. It also comes with a 30-day free trial. For mobile users, Boom offers a one-week free trial, after which you can either make a one-time payment for lifetime access or opt for a subscription.

Is Boom3D Compatible with AirPlay?

No, Boom3D’s special audio features are not compatible with AirPlay or FaceTime. To continue enjoying 3D surround sound, close AirPlay or FaceTime.

How to Install the Boom3D Component Installer?

The Boom3D Component Installer enhances your device’s audio quality. It includes features like 3D Surround Sound and Equalizers and can be downloaded from Global Delight’s website. Note: macOS 10.10.3 or later is required.

What Is Boom Remote and How Do I Get It?

Boom Remote is an additional app for Boom 2 and Boom3D. It allows you to control key features and is compatible with popular Mac apps like Spotify and iTunes. It’s available for iOS users and can be downloaded from the App Store.

Conclusion

If you’re looking to elevate your audio experience for music, movies, or games, Boom3D is worth a try. It’s user-friendly and offers a free 30-day trial. So, if you’re not satisfied with your current audio setup, Boom3D could be the solution you’ve been searching for.

Lastly, here are my personal pros and cons of Boom3D:

Pros:

Deep bass enhances sound quality
3D Surround Sound for an immersive experience
Free access to over 20,000 radio stations
Various Equalizer Presets for easy customization
Also serves as a versatile audio player

Cons:

No AirPlay support
Desktop purchase doesn’t include premium mobile app

The post Boom3D for Mac Review: Features, Prices, Pros, and Cons appeared first on Hongkiat.

Working on a commission: advice from professional digital artists

Original Source: https://www.creativebloq.com/how-to/working-on-a-commission

A trio of digital artists share their advice for working on a commission.

Connected Grid Layout Animation

Original Source: https://tympanus.net/codrops/2023/08/30/connected-grid-layout-animation/

Some ideas for simple on-scroll animations on “connected” grid layouts.

Are movie posters finally becoming beautiful again?

Original Source: https://www.creativebloq.com/news/painted-movie-posters

From The Killer to Zombie Town, some delightful designs just dropped.

Falling For Oklch: A Love Story Of Color Spaces, Gamuts, And CSS

Original Source: https://smashingmagazine.com/2023/08/oklch-color-spaces-gamuts-css/

I woke up one morning in early 2022 and caught an article called “A Whistle-Stop Tour of 4 New CSS Color Features” over at CSS-Tricks.

Wow, what a gas! A new and wider color gamut! New color spaces! New color functions! New syntaxes! It is truly a lot to take in.

Now, I’m no color expert. But I enjoyed adding new gems to my CSS toolbox and made a note to come back to that article later for a deeper read. That, of course, led to a lot of fun rabbit holes that helped put the CSS Color Module Level 4 updates in a better context for me.

That’s where Oklch comes into the picture. It’s a new color space in CSS that supports a wider gamut and, according to experts smarter than me, offers upwards of 50% more color than the sRGB gamut we have worked with for so long.

Color spaces? Gamuts? These are among many color-related terms I’m familiar with but have never really understood. It’s only now that my head is wrapping around these concepts and how they relate back to CSS, and how I use color in my own work.

That’s what I want to share with you. This article is less of a comprehensive “how-to” guide than it is my own personal journey grokking new CSS color features. I actually like to think of this more as a “love story” where I fall for Oklch.

The Deal With Gamuts And Color Spaces

I quickly learned that there’s no way to understand Oklch without at least a working understanding of the difference between gamuts and color spaces. My novice-like brain thinks of them as the same: a spectrum of colors. In fact, my mind goes straight to the color pickers we all know from apps like Figma and Sketch.

I’ve always assumed that gamut is just a nerdier term for the available colors in a color picker and that a color picker is simply a convenient interface for choosing colors in the gamut.

(Assumed. Just. Simply. Three words you never want to see in the same sentence.)

Apparently not. A gamut really boils down to a range of something, which in this case, is a range of colors. That range might be based on a single point if we think of it on a single axis.

Or it might be a range of multiple coordinates like we would see on a two-axis grid. Now the gamut covers a wider range that originates from the center and can point in any direction.

The levels of those ranges can also constitute an axis, which results in some form of 3D space.

sRGB is a gamut with an available range of colors. Display P3 is another gamut offering a wider range of colors.

So, gamuts are ranges, and ranges need a reference to determine the upper and lower limits of those axes. That’s where we start talking about color spaces. A color space is what defines the format for plotting points on the gamut. While more trained folks certainly have more technical explanations, my basic understanding of color spaces is that they provide the map — or perhaps the “shape” — for the gamut and define how color is manipulated in it. So, sRGB is a color gamut that spans a range of colors, and Hex, RGB, and HSL (among others, of course) are the spaces we have to explore the gamut.

That’s why you may hear a color space as having a “wider” or “narrower” gamut than another — it’s a range of possibilities within a shape.

If I’ve piqued your interest enough, I’ve compiled a list of articles that will give you more thorough definitions of gamuts and color spaces at the end of this article.

Why We Needed New Color Spaces

The short answer is that the sRGB gamut serves as the reference point for color spaces like Hex, RGB, and HSL that provide a narrower color gamut than what is available in the newer Display P3 gamut.

We’re well familiar with many of sRGB-based color notations and functions in CSS. The values are essentially setting points along the gamut space with different types of coordinates.

/* Hex */ #f8a100
/* RGB */ rgb(248, 161, 2)
/* HSL */ hsl(38.79 98% 49%)

For example, the rgb() function is designed to traverse the RGB color space by mixing red, blue, and green values to produce a point along the sRGB gamut.

If the difference between the two ranges in the image above doesn’t strike you as particularly significant or noticeable, that’s fair. I thought they were the same at first. But the Display P3 stripe is indeed a wider and smoother range of colors than the sRGB stripe above it when you examine it up close.

The problem is that Hex, RGB, and HSL (among other existing spaces) only support the sRGB gamut. In other words, they are unable to map colors outside of the range of colors that sRGB offers. That means there’s no way to map them to colors in the Display P3 gamut. The traditional color formats we’ve used for a long time are simply incompatible with the range of colors that has started rolling out in new hardware. We needed a new space to accommodate the colors that new technology is offering us.

Dead Grey Zones

I love this term. It accurately describes an issue with the color spaces in the sRGB gamut — greyish areas between two color points. You can see it in the following demo.

Oklch (as well as the other new spaces in the Level 4 spec) doesn’t have that issue. Hues are more like mountains, each with a different elevation.

That’s why we needed new color spaces — to get around those dead grey zones. And we needed new color functions in CSS to produce coordinates on the space to select from the newly available range of colors.

But there’s a catch. That mountain-shaped gamut of Oklch doesn’t always provide a straight path between color points which could result in clipped or unexpected colors between points. The issue appears to be case-specific depending on the colors in use, but that also seems to indicate that there are situations where using a different color space is going to yield better gradients.

Consistent Lightness

It’s the consistent range of saturation in HSL muddying the waters that leads to another issue along this same train of thought: inconsistent levels of lightness between colors.

The classic example is showing two colors in HSL with the same lightness value:

The Oklab and Oklch color spaces were created to fix that shift. Black is more, well, black because the hues are more consistent in Oklab and Oklch than they are in LAB and LCH.

So, that’s why it’s likely better to use the oklch() and oklab() functions in CSS than it is to use their lch() and lab() counterparts. There’s less of a shift happening in the hues.

So, while Oklch/LCH and Oklab/LAB all use the same general color space, the Cartesian coordinates are the key difference. And I agree with Sitnik and Turner, who make the case that Oklch and LCH are easier to understand than LAB and Oklab. I wouldn’t be able to tell you the difference between LAB’s a and b values on the Cartesian coordinate system. But chroma and hue in LCH and Oklch? Sure! That’s as easy to understand as HSL but better!

The reason I love Oklch over Oklab is that lightness, chroma, and hue are much more intuitive to me than lightness and a pair of Cartesian coordinates.

And the reason I like Oklch better than HSL is because it produces more consistent results over a wider color gamut.

OKLCH And CSS

This is why you’re here, right? What’s so cool about all this is that we can start using Oklch in CSS today — there’s no need to wait around.

“Browser support?” you ask. We’re well covered, friends!

In fact, Firefox 113 shipped support for Oklch a mere ten days before I started writing the first draft of this article. It’s oven fresh!

Using oklch() is a whole lot easier to explain now that we have all the context around color spaces and gamuts and how the new CSS Color Module Level 4 color functions fit into the picture.

I think the most difficult thing for me is working with different ranges of values. For example, hsl() is easy for me to remember because the hue is measured in degrees, and both saturation and lightness use the same 0% to 100% range.

oklch() is different, and that’s by design to not only access the wider gamut but also produce perceptively consistent results even as values change. So, while we get what I’m convinced is a way better tool for specifying color in CSS, there is a bit of a learning curve to remembering the chroma value because it’s what separates OKLCH from HSL.

The oklch() Values

Here they are:

l: This controls the lightness of the color, and it’s measured in a range of 0% to 100% just like HSL.
c: This is the chroma value, measured in decimals between 0 and 0.37.
h: This is the same ol’ hue we have in HSL, measured in the same range of 0deg to 360deg.
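To make those three channels concrete, here is a small hypothetical sketch that varies one channel at a time. The custom property names are mine, and the values are purely illustrative:

:root {
  --brand: oklch(70% 0.15 250);        /* a medium-light blue */
  --brand-dark: oklch(50% 0.15 250);   /* same chroma and hue, lower lightness */
  --brand-muted: oklch(70% 0.05 250);  /* same lightness and hue, lower chroma */
  --brand-warm: oklch(70% 0.15 50);    /* same lightness and chroma, rotated hue */
}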

Again, it’s chroma that is the biggest learning curve for me. Yes, I had to look it up because I kept seeing it used somewhat synonymously with saturation.

Chroma and saturation are indeed different. And there are way better definitions of them out there than what I can provide. For example, I like how Cameron Chapman explains it:

“Chroma refers to the purity of a color. A hue with high chroma has no black, white, or gray added to it. Conversely, adding white, black, or gray reduces its chroma. It’s similar to saturation but not quite the same. Chroma can be thought of as the brightness of a color in comparison to white.”

— Cameron Chapman

I mentioned that chroma has an upper limit of 0.37. But it’s actually more nuanced than that, as Sitnik and Turner explain:

“[Chroma] goes from 0 (gray) to infinity. In practice, there is actually a limit, but it depends on a screen’s color gamut (P3 colors will have bigger values than sRGB), and each hue has a different maximum chroma. For both P3 and sRGB, the value will always be below 0.37.”

— Andrey Sitnik and Travis Turner

I’m so glad there are smart people out there to help sort this stuff out.

The oklch() Syntax

The formal syntax? Here it is, straight from the spec:

oklch() = oklch( [ <percentage> | <number> | none ]
[ <percentage> | <number> | none ]
[ <percentage> | <number> | none ]
[ / [ <alpha-value> | none ] ]? )

Maybe we can “dumb” it down a bit:

oklch( [ lightness ] [ chroma ] [ hue ] )

And those values, again, are measured in different units:

oklch( [ lightness <percentage> ] [ chroma <number> ] [ hue <degrees> ] )

Those units have min and max limits:

oklch( [ lightness <percentage (0%-100%)> ] [ chroma <number> (0-0.37) ] [ hue <degrees> (0deg-360deg) ] )

An example might be the following:

color: oklch(70.9% 0.195 47.025);

Did you notice that there are no commas between values? Or that there is no unit on the hue? That’s thanks to the updated syntax defined in the CSS Color Module Level 4 spec. It also applies to functions in the sRGB gamut:

/* Old Syntax */
hsl(26.06deg, 99%, 51%)

/* New Syntax */
hsl(26.06 99% 51%)

Something else that’s new? There’s no need for a separate function to set alpha transparency! Instead, we can indicate that with a / before the alpha value:

/* Old Syntax */
hsla(26.06deg, 99%, 51%, .75)

/* New Syntax */
hsl(26.06 99% 51% / .75)

That’s why there is no oklcha() function — the new syntax allows oklch() to handle transparency on its own, like a grown-up.

Providing A Fallback

Yeah, it’s probably worth providing a fallback value for oklch() even if it does enjoy great browser support. Maybe you have to support a legacy browser like IE, or perhaps the user’s monitor or screen simply doesn’t support colors in the Display P3 gamut.

Providing a fallback doesn’t have to be hard:

color: hsl(26.06 99% 51%);
color: oklch(70.9% 0.195 47.025);

There are “smarter” ways to provide a fallback, like, say, using @supports:

.some-class {
  color: hsl(26.06 99% 51%);
}

@supports (color: oklch(100% 0 0)) {
  .some-class {
    color: oklch(70.9% 0.195 47.025);
  }
}

Or detecting Display P3 support on the @media side of things:

.some-class {
  color: hsl(26.06 99% 51%);
}

@media (color-gamut: p3) {
  .some-class {
    color: oklch(70.9% 0.195 47.025);
  }
}

Those all seem overly verbose compared to letting the cascade do the work. Maybe there’s a good reason for using media queries that I’m overlooking.

There’s A Polyfill

Of course, there’s one! There are two, in fact, that I am aware of: postcss-oklab-function and color.js. The PostCSS plugin will preprocess support for you when compiling to CSS. Alternatively, color.js will convert it on the client side.
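As a rough sketch of what the build-time route does, the plugin takes your authored oklch() declaration and emits an sRGB fallback ahead of it, conceptually the same two-line fallback we wrote by hand earlier (the compiled values here are approximate, not the plugin’s exact output):

/* Authored */
.some-class {
  color: oklch(70.9% 0.195 47.025);
}

/* Roughly what gets compiled out */
.some-class {
  color: hsl(26.06 99% 51%); /* approximate sRGB fallback */
  color: oklch(70.9% 0.195 47.025);
}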

That’s Oklch 🥰

O, Oklch! How much do I love thee? Let me count the ways:

You support a wider gamut of colors that make my designs pop.
Your space transitions between colors smoothly, like soft butter.
You are as easy to understand as my former love, HSL.
You are well-supported by all the major browsers.
You provide fallbacks for handling legacy browsers that will never have the pleasure of knowing you.

I know, I know. Get a room, right?!

Resources

CSS Color Module Level 4, W3C
W3C Workshop on Wide Color Gamut and High Dynamic Range for the Web, Chris Lilley (W3C)
“OKLCH in CSS: why we moved from RGB and HSL,” Andrey Sitnik and Travis Turner
“Color Formats in CSS,” Joshua Comeau
“High Definition CSS Color Guide,” Adam Argyle
“LCH colors in CSS: what, why, and how?,” Lea Verou
“OK, OKLCH 👑,” Chris Coyier
“It’s Time to Learn oklch Color,” Keith J. Grant
“Color Theory For Designers, Part 2: Understanding Concepts And Color Terminology,” Cameron Chapman (Smashing Magazine)
HSL and HSV, Wikipedia