UX for emerging experiences

Original Source: http://feedproxy.google.com/~r/CreativeBloq/~3/gXjVXQFPctg/ux-for-emerging-experiences

As the web landscape changes, so does user experience, and to stay competitive you need to embrace the new. One thing that doesn't change is the user: if they have a poor experience, they will simply look elsewhere. So what are the emerging experiences that you need to consider today?

The theory of UX

These are the seven key themes that you should be designing for: inclusivity and accessibility, immersion, trust and transparency, coherence, conversation, collaboration and efficiency. Alongside these key themes we reveal the tools that you will need to ensure design success.

Design for inclusivity

Sometimes referred to as 'Universal Design', inclusive design considers as many people's needs and abilities as possible, instead of a 'one size fits all' approach to the experience.

As designers, it can be easy to unwittingly design for those who are just like us, or to deprioritise inclusivity because of tight budgets or deadlines. Instead, we should be aiming to include people with varying ranges of cognitive or physical ability, rather than exclude them. We can do this by removing the barriers that create extra effort and separation, enabling the end user of your product or service to have the confidence to participate equally, and without support.

Tech For Good homepage

Tech For Good also has a podcast

Over the next year, expect to see inclusive and ethical design become a standard part of the UX design process. Fortunately, plenty of people are getting involved across the digital community, with social movements such as The A11Y Project, AXSChat and Tech For Good gathering rapid momentum over the past 12 months. These groups provide a supportive space for designers to learn more about the inclusive design process and the problems that different people face when using technology.

Inclusive design shouldn't be confused with accessible design. Products and services are usually made accessible as an afterthought; for example, a watch might be retrospectively made accessible for blind people by including braille numbering on top of the watch face. This modification to a device designed for those with sight may solve one technical problem, but introduce many more issues for those who are blind. Inclusive design seeks to fundamentally redesign a product from scratch, removing barriers from the start. Inclusive design is proactive, not reactive.

When starting any new project, one of the most important questions UX designers should ask themselves at all stages of the design process is, 'Who will this design exclude?'

Top tools
Funkify Disability Simulator
Funkify is an extension for Chrome that helps you experience the web and interfaces through the eyes of users with different abilities and disabilities. It is created by a team of usability and accessibility experts in Sweden.
Stark
A colour-blind simulator and contrast checker for Sketch. Simulate the various forms of colour-blindness by quickly previewing your Sketch designs and making adjustments as needed.
Contrast
A macOS app that provides access to WCAG colour contrast ratios. The entire app UI updates instantly when picking colours, making it easy to get the contrast information you need to make informed decisions about the colour of your text.
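The contrast ratio these tools report comes from the relative luminance of the two colours being compared. As a rough sketch of what happens under the hood, the following Python follows the WCAG 2.x formula; the helper names here are our own, not part of any tool's API:

```python
# Minimal WCAG 2.x contrast-ratio check (an illustrative sketch, not a
# replacement for dedicated tools like Stark or Contrast).

def _linearise(channel):
    """Convert an 8-bit sRGB channel (0-255) to its linear-light value."""
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    """Relative luminance per the WCAG 2.x definition."""
    r, g, b = (_linearise(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colours, ranging from 1:1 up to 21:1."""
    lighter = max(relative_luminance(fg), relative_luminance(bg))
    darker = min(relative_luminance(fg), relative_luminance(bg))
    return (lighter + 0.05) / (darker + 0.05)

# Pure black on pure white is the maximum possible ratio.
print(round(contrast_ratio((255, 255, 255), (0, 0, 0)), 1))  # prints 21.0
```

WCAG level AA asks for at least 4.5:1 for normal-sized body text, which is why a mid-grey like rgb(118, 118, 118) on white only just passes while lighter greys fail.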
Design for immersion

Traditionally, UX designers had a clear separation of realities to design for: real life, and the experience delivered on screen by the person's device. Now the lines have been well and truly blurred, with Augmented Reality (AR) and Virtual Reality (VR) entering mainstream use. It's not enough to design for screens, pages and offline touchpoints anymore; the concept of multiple dimensions opens up a plethora of ways to enhance the experience.

A whole host of interactions can be incorporated into designs, such as picking up, pinching, pushing and pulling, facial expressions, and even air tapping for Microsoft's HoloLens. You must also think about the cues you will give users who are used to interacting with flat screens: how will you encourage them to look around in the space?

Microsoft HoloLens

Microsoft HoloLens brings holograms into the real world

With new immersive technologies you can use audio to grab attention, or display elements just off screen to prompt users to look left and right. This technology also gives you the opportunity to play with objects in a 3D space, so it's important that designers become comfortable with how shadow and light can be used to create the illusion of depth and mass for objects in the interface.

Designers also need to be conscious of the right context for these interactions. Interacting with an augmented reality app whilst driving, for example, would be entirely inappropriate, and a voice interaction might be more suitable in that scenario. Thorough research and testing are required of the UX practitioner to find and understand these contexts and user goals.

Overall, expect the prevalence of AR and VR to increase rapidly over the next few years as businesses and organisations find ways for this technology to fit their business models.

Top tools
A-Frame
A-Frame is a web framework for building VR experiences. Originally developed by Mozilla, it is an independent open source project. A-Frame is HTML, making it simple to get started.
Microsoft HoloLens
Microsoft's 'mixed-reality' product, HoloLens, is the first self-contained holographic computer, enabling you to engage with your digital content and interact with holograms in the world around you.
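Because A-Frame scenes are plain HTML, a first experiment takes only a few lines. The snippet below is a minimal illustrative scene built from A-Frame's standard primitives; the CDN version number is an assumption and should be checked against the current release:

```html
<!-- A minimal A-Frame scene: a box, a sphere, a ground plane and a sky,
     viewable in any WebVR/WebXR-capable browser. -->
<html>
  <head>
    <script src="https://aframe.io/releases/1.5.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D9"></a-box>
      <a-sphere position="0 1.25 -5" radius="1.25" color="#EF2D5E"></a-sphere>
      <a-plane position="0 0 -4" rotation="-90 0 0" width="4" height="4" color="#7BC8A4"></a-plane>
      <a-sky color="#ECECEC"></a-sky>
    </a-scene>
  </body>
</html>
```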
Design for collaboration

As the UX designer role and its many permutations become more ubiquitous, teams are growing and have a bigger seat at the table. As a result, more business stakeholders are interested in knowing about – or even being involved in – what you're doing.

The UX role has now matured, and there are plenty of online communities, tools, conferences and books aimed specifically at the UX designer. To complete the perfect storm, the digital marketplace is saturated with multiple offerings for a single type of product, and organisations are more willing to invest time in creating unique user experiences that make them stand out in a fiercely competitive crowd. Suddenly, UX practitioners find they not only have a voice, but are influential in navigating a product or service to market.

RealtimeBoard

Use RealtimeBoard to design journey maps, personas and other planning canvases

Superior soft skills are the secret weapon behind superior UX teams. This includes communication, listening, empathy, workshop facilitation, teamwork and storytelling. These provide the foundation that all other deliverables are based on. How do you know if the prototype you are testing will meet a user need if you have not listened properly in the research phase of the project? 

These skills are not innate talents and need to be practised just like any other. Not only that, but developing your soft skills as a team enables you all to communicate properly with one another, forming a common strategy so you can all aim for the same goal. There are many good UX practitioners, but great ones have exceptional soft skills to help them do their job.

Collaborating with the customer or client is also essential to a smooth-running project. Tools such as Marvel, InVision and Axure enable you to quickly prototype your work to show them the 'Promised Land': instead of sending emails back and forth, you can make your solutions come alive. The benefit of this approach is increased buy-in from clients and customers, and frictionless collaboration.

Some of the biggest obstacles to collaboration on a project can come from other business stakeholders and departments not understanding what UX activities entail. The solution here is to be as transparent and open as possible. As a team, you can pique people's interest by using wall space in high-traffic areas to display deliverables such as personas, journey maps and wireframes, sparking conversations between different people within the business.

Even the rise of remote working and distributed teams is no longer a threat to the UX team. There's a tool for every stage of the process, and you don't even need to be in the same room as each other. Project planning and management can be organised through tools such as Slack, Flock or Asana. Visual deliverables can be taken care of using collaborative whiteboards such as RealtimeBoard. Teams can work simultaneously to create fully fledged prototypes using one of the new generation of tools like Figma or InVision.

Top tools
Figma
Figma is a browser-based design tool that makes it easier for teams to create software. Present and prototype in the same tool as you design, and version control your team's designs.
Float
Float enables you to visualise your team's project assignments, overtime and time off in one place. Collaborate on project plans and resolve conflicts with real-time drag-and-drop scheduling.
Loop11
Loop11 is integrated with JustInMind.com and is used to create prototypes that can then be used to run online usability tests, with the results shown in detailed reports and in-test videos.
Design for trust

Trust is a human emotion that can be designed for, and it can make or break the user's experience, so why is it so hard to earn? Well, there's a lot out there to put off even the most savvy digital user, with dark UX patterns, fake news and clickbait rife. Emerging technologies such as blockchain and self-driving vehicles will put the majority of UX designers' skills to the test.

In recent years trust has shifted from being controlled top-down by the business or organisation to being shaped collectively by users, who share on social media how trustworthy (or untrustworthy) their experiences with a brand have been. It's fair to say that companies are no longer in control of this aspect of how they are viewed, so it's imperative that a brand's actions speak louder than its words.

To gain the trust of the user, the experience must become as transparent as possible, with businesses being open about their motives, beliefs and activities. Designers can enable that relationship by not hiding away this information from the user, removing any anxieties they may have.

UserTesting.com

UserTesting.com is a great online tool for unmoderated testing

When a customer takes a leap of faith and invests their time, and possibly their money, in your product or service, you suddenly have a social responsibility to make good on that relationship. So how can trust be designed for? Thankfully there are a few techniques UX designers can use to instil confidence in the end user throughout their journey.

We all judge a book by its cover, and it's well known that a user is more likely to trust a site that is aesthetically pleasing. This is called the aesthetic-usability effect: we perceive beautiful things as easier to use than ugly ones (even if that is not the case). The site's look and feel should also include a tone of voice and type of imagery that convey a professional, reliable impression of the business or organisation.

Of course, the ultimate indicator of trust should always be in the user testing results, along with observations of the user's reactions to sites. Subjective measures like trust can also be captured at the end of tests. Moderated user testing will always provide much greater insights, but there are tools online to run unmoderated tests such as UserTesting.com.

Top tools
Dark Patterns
Dark Patterns are tricks used in websites and apps that make you buy or sign up for things that you didn't mean to. The purpose of the Dark Pattern Library site is to spread awareness and to document the companies that use such techniques.
Government Research Consent Guidelines
The UK government website contains an entire manual on service design and the consent forms you need signed to ensure you can be trusted with a person's data gathered during user research.
Design for coherence

With more and more touchpoints emerging, organisations are in danger of the user journey becoming so heavily fragmented that it turns into an incoherent mess. Adding to the omnichannel experience, there are now chatbots and other voice interfaces to consider in the user's journey, so the experiences and conversations people have with them need careful design.

Planning is key, taking a 'helicopter view' of the entire user's journey with the business. This should include doing as much user research as possible to make sure the touchpoints you design align with their goals, and what they're doing in real life. Turning this research into user journey maps and personas will help guide designers on which touchpoints should be used for different audiences. Many tools exist for supporting these activities; Smaply caters for all of the above, and Xtensio can be used to create simple personas and diagrams, but there are also more traditional offline tools such as Axure that you can use to get the job done. 

MockFlow interface

With MockFlow you can plan and create better user interfaces

It's also important to consider which touchpoints shouldn't be designed for, especially if it is discovered during the research that it would be inappropriate to use certain methods to contact certain audiences. For example, on a digital experience dealing with a homeless person registering for support services, would it be appropriate to ask for an address? 

Designing a coherent experience means not just designing for screens and apps anymore, but every means of contact the customer has with that organisation, so that a unified message can be delivered, regardless of the type of touchpoint. It's imperative that this key message is decided on from the start. The entire UX team should know from research what message to deliver. 

It's a common belief that the more material you present to the user, the greater the chance that some of it will be remembered: the old adage of throwing a load of mud in the hope some will stick. But this isn't true. Your audience will end up confused about the message you are trying to deliver.

Top tools
Axure
Create simple click-through diagrams or highly functional, rich prototypes with conditional logic, dynamic content, animations, math functions and data-driven interactions. Use Axure Share to upload content to share with your team.
Asana
Asana is an online project management tool, designed to help teams track their work. Asana gives you everything you need to stay in sync, hit deadlines and reach your goals.
MockFlow
MockFlow provides a full solution for design teams, which includes wireframing, sitemaps, UI spec systems, design workflow and more. It enables you to plan and create better user interfaces together within a single suite.
Storyboard That
Storyboards are a fun and engaging way to relay research findings and user journeys to stakeholders. Use the extensive image library and flexible templates to create storyboards of this information.
Smaply
This website has an online editor which enables you to integrate basic service design tools into your daily work, such as user journey maps, stakeholder maps and personas. Your designs can be downloaded as PDFs and image files.
Design for efficiency

Kaktus interface

With Kaktus you can implement version control without having to learn a new set of tools

As UX teams grow, smarter ways are needed to manage the multitude of design assets a team creates. No more naming your work 'homepage_wireframe_finalFinal14.pdf', or taking it in turns to work on the same document. Thankfully, there are now tools aimed specifically at design teams for version controlling design work. The majority are based on Git, the same technology developers use to manage their application code.

There are many advantages to using this sort of software to manage your designs. Not only can multiple designers work on the same project at the same time, but you can roll back to a previous version if needed. Although you will only see the current version of a file, a full version history is kept, and reviewing the changes made between versions is even possible. These features mean problems like losing work when a file is accidentally overwritten, or two people making changes to the same thing, are now a thing of the past.

Once changes are made, many tools let you communicate those changes to the team. This is a step forward in terms of productivity and efficiency, enabling projects to be completed as quickly as possible. Lots of the larger web-based design tools like Figma and UXPin provide this as part of the subscription, but there are standalone tools like Kaktus, Abstract and Folio for Mac.

Design for conversation

The rise of chatbots and other conversational devices such as Amazon Alexa and Google Home has been all-pervasive over the past few years, and many companies are still trying to work out where this new technology fits into their strategy with customers. But where does traditional experience design fit in, especially when there is no physical interface to design? This is a new frontier for service design, with endless possibilities for designing intuitive and human-centred experiences that people love.

Conversations between human beings are intricate, complex and heavily nuanced, not to mention the cultural and semantic differences observed in humans across the world. How do you anticipate and plan for the vast array of possible questions and reactions a human being might have? Designers will need to spend time designing all the possible flows and outcomes these conversations might take. The more human the experience can be the better, but how can you make a machine appear human? How do you build a relationship with a machine? These are questions the UX designer must consider to create an effective outcome for the end user.
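Mapping out all the possible flows a conversation might take is, at its simplest, a state machine exercise. The Python sketch below is a hypothetical, heavily simplified illustration of the idea; the states, intents and replies are invented for the example, and real conversational platforms layer natural-language understanding, context and error recovery on top:

```python
# A toy conversation flow: each state maps recognised user intents to
# the bot's reply and the next state. Unknown intents fall back to a
# polite re-prompt instead of dead-ending the conversation.

FLOW = {
    "greeting": {
        "order": ("What would you like to order?", "ordering"),
        "help": ("I can take orders or track deliveries.", "greeting"),
    },
    "ordering": {
        "pizza": ("One pizza coming up. Anything else?", "confirming"),
    },
    "confirming": {
        "no": ("Thanks, your order is placed!", "done"),
    },
}

def respond(state, intent):
    """Return (reply, next_state); unplanned intents stay in the same state."""
    return FLOW.get(state, {}).get(
        intent, ("Sorry, I didn't catch that. Could you rephrase?", state)
    )

state = "greeting"
reply, state = respond(state, "order")
print(reply)                              # What would you like to order?
reply, state = respond(state, "sushi")    # an intent the flow never planned for
print(reply)                              # the graceful fallback
```

The fallback branch is the part testing exposes most: users will always say things no flow anticipated, and a graceful "didn't catch that" keeps the conversation alive.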

man with laptop

Conversational interfaces bring a whole new set of challenges with them

Understanding the context that your designs will be used in is also important, so rigorous and in-depth research is essential. Would your target audience use a voice interface walking down the street? Would it be usable if it was a noisy street? All this can be answered by spending time understanding your users and capturing what their goals are.

Another essential part of the UX practitioner's role will be in planning for and testing these conversational interfaces. This will be very different to traditional testing of apps and sites, and will require much more rigorous planning of scripts and testing sessions. 

There are a few tools for designing proposed chatbot conversations and UIs, such as BotPreview and Botsociety, which enable you to test these conversations on real people before you release your chatbot or conversational UI. As a result of the frenzied focus on this emerging technology, expect to see new roles created as offshoots of the standard UX Designer and the relatively new UX Writer titles, such as 'Conversational Designer' (catering for the research, testing, behaviours and personality of the interface) and 'Conversational Strategist' (a niche role dedicated to designing the flows and logic of the conversations).

Top tools
BotPreview
Sketch and design your own chatbot interactions using the BotPreview online editor and share them or export as static HTML or MP4/GIF video, without writing a single line of code.
Botsociety
Design voice and chat interfaces using the online web editor by quickly building a high-fidelity preview of your next chatbot or voice assistant. Botsociety takes care of the appearance, the platform limitations, the preview, the export and the user testing for you.
Botmock
Botmock uses a drag-and-drop editor with templates to build prototypes of conversational design. Map out the customer's journey, and create a live preview that can be exported to GIF and video.
Bots UI Kit for Sketch
A simple and fully customisable Sketch UI kit to help you design and showcase your Facebook Messenger bots. All elements are turned into new branded Sketch symbols, so prototyping has never been easier.
Walkie
This tool is especially for Slack users, to help design Slack bot dialogues. It provides an easy way to write and test bot dialogues, which can include buttons and attachments.

This article was originally published in issue 274 of creative web design magazine Web Designer. Buy issue 274 here or subscribe to Web Designer here.

Related articles:

New skills in UX design
What are the main barriers to good UX today?
Why graphic designers need to master UX

Help design the new Firefox logo

Original Source: http://feedproxy.google.com/~r/CreativeBloq/~3/KDnzxsGheWE/help-design-the-new-firefox-logo

When it comes to Firefox, most people think of the colourful critter-cum-web browser that brightens up their desktop. For Firefox though, this isn't quite enough. According to Tim Murray, creative director at the company's nonprofit owner, Mozilla, there's more to the web browser than is reflected in the current logo design (see below).

Old Firefox logo

The previous Firefox logo has been in use since 2017

To help correct this injustice, Firefox revealed in a blog post earlier this week that it wants the public to get involved with helping to evolve its brand. It follows in the footsteps of Mozilla, which open-sourced the process of selecting its new design and brand identity in 2017.

The decision to move the Firefox brand forwards through a redesign comes as users find new ways to use the internet, with methods that are not truly reflected in the flaming fox design.

"As an icon, that fast fox with a flaming tail doesn’t offer enough design tools to represent this entire product family," says Murray. "Recoloring that logo or dissecting the fox could only take us so far. We needed to start from a new place."

To create a brand system that truly communicates what Firefox is all about, a team of product and brand designers at Mozilla have reworked its design system, producing two candidate systems that can be explored in a gallery on the blog post.

What do you think? If we're being honest, and slightly contrarian, we like the fox icons in system 2, but prefer the geometric icon designs in system 1. Go figure.

Crucially though, this isn't a decision by public vote. Firefox is looking for constructive feedback to be left in the comments section of the blog post. And it's important to keep in mind that these icon designs aren't final. You could help shape their look!

So if you've got something useful to say, head over to the blog, leave a comment, and help shape a piece of sure-to-be-ubiquitous design.

Related articles:

11 places to find logo design inspiration
Quiz: guess the logo, can you identify these brands?
How to price logo design services

The future of design: AR will be bigger than the internet

Original Source: http://feedproxy.google.com/~r/CreativeBloq/~3/gZMcIB_Bf74/the-future-of-design-ar-will-be-bigger-than-the-internet

Soon, a new era of experimental design and design thinking will be upon us. We’ll have entirely augmented experiences everywhere we walk, and voice design is the next big horizon for creatives. 

They’re just two predictions into the future of design shared by Scott Belsky, co-founder of Behance, and Adobe's chief product officer and executive vice president of Creative Cloud. 

Belsky took to the stage in London at an exclusive Adobe event earlier this summer to talk through the challenges and opportunities presented by emerging technologies – and to forecast the future for designers.

Scott Belsky at the Gherkin in London

Scott Belsky at Adobe’s Future of Design event in London

As the future becomes increasingly commoditised, he said, creativity – and the role of user experience designers, particularly – will become increasingly important. 

"Companies are putting designers at the head of the table," he explained. "The user’s experience of technology these days is even more important than the tech itself. The UI is what distinguishes a product; a company. That’s one reason why designers are being employed across industries."

In fact, when Adobe spoke to hiring managers at a range of top companies, 87 per cent of them said that UX designers are some of their most critical hires right now. 

So aside from a bright future for UX designers, what else is next for design? Here are five predictions Belsky made at the event – followed by an exclusive conversation with Creative Bloq, in which he explores the biggest new challenges and opportunities designers should prepare for.

01. Augmented reality

“We’ll soon have entirely augmented experiences everywhere we walk. AR will be as critical as the web,” Belsky predicted, adding that this is why Adobe has developed Project Aero, a powerful new augmented reality tool that makes it easier for designers and developers to create immersive content, and bridge the gap between the physical and digital worlds. More on that below.

02. Voice design

“It’s the simplest interface of all, so we need to be able to design for it,” he said. Voice design tools are being brought into Adobe XD because we’re moving into a voice-driven world (think: Amazon Echo and Google Home) – and it’s raising many questions for designers, not least ethical ones.  

03. Artificial intelligence

Labour will become increasingly automated, with AI and machine learning helping creatives work smarter and faster by taking on repetitive tasks. “AI is a vertical of creativity,” said Belsky. “Think of it as a creative assistant.” 

04. Connected creativity

New tools like Adobe Capture – which turns photos on your phone or tablet into creative assets – will continue to deliver on the creative freedom promised by Creative Cloud in increasingly unique ways. “There’s an idea that in some ways we’re still chained to desktop – we expect to do our professional work there,” he said. “But that’s not where creativity happens.”

05. Ethics in design

What are our responsibilities for the end customer experience? What is the responsibility of the designer in preserving a consumer choice? When using visual search, such as Google, you're presented with a lot of options. Using a voice interface, this might not be the case – so who chooses which option you get, and how can you ensure the consumer’s best interests are served? Ethical questions have always been important, but in this new age of design they're even more so.

New challenges and opportunities for designers

iPad with an image of a creature on it

Project Aero: immersive media is poised to become the next disruptive platform. Welcome to the first wave of mainstream AR

So will AR really be bigger than the web? What sorts of questions is voice design raising? And what skills will designers need to meet the future of design head-on? We caught up with Belsky after the event to find out more…

What are the biggest opportunities of AR for designers?

Scott Belsky: I believe AR will do almost everything the web does for us, but in the context of our physical world, rather than on a screen. It will change the way we do everything from finding our way around cities, to reviewing the menu in restaurants, to dating, to fixing appliances in our homes. 

Augmented Reality will enrich these experiences in ways we can barely imagine. However, none of this is possible without designers creating compelling three-dimensional interactive content and being able to collaborate with developers across platforms. 

AR and voice have the greatest potential to disrupt the way we experience the world. Every business group across Adobe is thinking about and building for AR because we strongly believe that it’s a transformative medium. AR is at the intersection of our physical and digital worlds, and requires a fundamentally different paradigm for interaction and design beyond the traditional screen experience. Designers will have the opportunity to literally design a new reality, and that’s going to be fun and challenging. 

How soon will AR be everywhere?

SB: We’re at the beginning of a journey with augmented reality. We believe that Project Aero is breaking new ground, with the goal of simplifying the development of AR content, delivering an even more powerful medium for storytelling for artists and designers around the world. Through our collaboration with Apple, Pixar and other partners, Project Aero will give creative professionals the ability to create more authentic experiences. 

What’s compelling is the quality and depth of the imagery, which makes the experience real and even more vivid. The industry is evolving at a rapid pace and there will be commercial and consumer demand for these types of experiences.

We see the potential of AR experiences to enable new forms of creative expression, spawn new customer experiences, and ignite new business models that we can’t even imagine today. We envision immersive media ultimately becoming ubiquitous in everyday life.  We’ll have a new interface through which we interact with a range of retail, news, search and other common applications.

What are the biggest challenges of AR for designers? How will Project Aero help?

SB: Most designers I speak with are excited about AR, but have no idea where to get started designing immersive experiences and how to work with developers to make them a reality. 

Our challenge is to help designers work with the tools they know and love, like Photoshop or Adobe XD for screen design, and then import their work to new tools like Adobe Dimension to make their creations 3D. And then, with Project Aero, designers will be able to make their creations interactive and easily 'published' to locations in augmented reality.  

For the first time, designers will be able to lay out and manipulate designs in physical spaces with a ‘what you see is what you get’ tool, making AR creation more fluid and intuitive. What’s more, delivering these immersive experiences to audiences on mobile devices will become faster, easier and safer. 

How can designers get ahead in voice design?

SB: Design is becoming more immersive and voice has become more important. Increasing numbers of people use a voice interface to order dinner, choose music, set reminders, and so many other tasks, thanks in large part to consumer products like Amazon’s Alexa and Google Assistant. 

Smart speakers will be installed in more than 70 million U.S. households by 2022, according to a Juniper Research report, and consumers have high expectations of voice technology because they’re used to naturally interacting and talking to people. For designers, creating voice user interface (VUI) experiences requires new skills that transcend the keyboard, mouse and screen.

For designers to be successful in the future, they’ll need to know how to create a voice interface that is efficient and intuitive. Our goal is to help designers succeed in this medium and in the broader world of immersive and interaction design. That’s one of the reasons we’ve invested so heavily in Adobe XD as an experience design platform that can adapt to new modalities over time.  

Adobe XD brings prototyping and design together, which has unlocked new capabilities including allowing designers to easily switch from wireframes to prototypes and use tools such as After Effects to add deeper animations to their UX/UI designs. Unfortunately, I can’t share more now, but you’ll see a massive amount of innovation from us as it relates to XD in the coming months. 

What are the biggest hurdles posed by voice design? 

SB: As I mentioned, there has been a tremendous growth in voice-enabled devices. For designers, creating VUI experiences requires new skills since you cannot simply apply the same design guidelines to VUI, as you would a graphical app or web experience. Designers must have a deep understanding of human communication and natural conversation flow to design for VUIs.

Additionally, it requires a mindset shift to design for this medium. VUIs need to contain the right amount of information to meet users’ expectations and provide users with information on what they can do with the technology. For example, proactive prompting along the lines of, 'What can I help you with today?' might help a user get started. Without visual guidance, it’s easy for the user to get lost.

There are, of course, ethical considerations when it comes to VUI design too. For example, designers will need to carefully consider how often the technology is listening or recording, and clearly spell that out for the user. Companies and their designers will need to ensure privacy is baked into the product from the start. 

Another important issue in voice is the default settings. When you ask your voice assistant to order flowers, what service does it default to using? Making tasks easy is great for consumers, but the design will have to make it transparent how those tasks are happening and give users the option of changing the defaults so they can personalise the experience.


User Experience Psychology And Performance: SmashingConf Videos

Original Source: https://www.smashingmagazine.com/2018/08/smashingconf-ux-videos/

The Smashing Editorial

2018-08-01T13:30:35+02:00

Today, we’d like to shine a light on two videos from our archives as we explore two very different approaches to User Experience (UX). The first explores how we relate our websites to the needs and situations of our visitors, trying to meet them where they are emotionally. The second is a detailed technical exploration into how we measure and track the data around performance as it relates to user experience.

The second video may seem unrelated to the first video; however, while the collecting and analyzing of data might seem very impersonal, the improvements we can make based on the information makes a real difference to the experience of the people we build our sites to serve.

Designing Powerful User Experiences With Psychology

Recorded at the SmashingConf in San Francisco earlier this year, Joe Leech explains how psychology impacts user experience. Joe explains the frustrations people using our products face, and the things happening in their everyday lives and environment that can make interacting with our websites and applications difficult. He goes on to help us understand how we can design in a way to help these visitors rather than frustrate them.

How’s The UX On The Web, Really?

Once you have created a great user experience, how do you know that it is really working well? Especially in terms of site performance, we can track how people are using our sites and examine that data to see what is really happening.

At the SmashingConf in London, Ilya Grigorik was the Mystery Speaker and spoke about the ways to assess performance in real terms, and benchmark your application against other destinations on the web.

Enjoyed listening to these talks? There are many more SmashingConf videos on Vimeo. We’re also getting ready for the upcoming SmashingConf in New York — see you there? 😉



How to use Media Queries in JavaScript with matchMedia

Original Source: https://www.sitepoint.com/javascript-media-queries/

When it was first introduced, responsive design was one of the most exciting web layout concepts since CSS replaced tables. The underlying technology uses media queries to determine the viewing device type, width, height, orientation, resolution, aspect ratio, and color depth to serve different stylesheets.

If you thought responsive design was reserved for CSS layouts only, you’ll be pleased to hear media queries can also be used in JavaScript, as this article will explain.

Media Queries in CSS

In the following example, cssbasic.css is served to all devices. But, if it’s a screen with a horizontal width of 500 pixels or greater, csswide.css is also sent:

```html
<link rel="stylesheet" media="all" href="cssbasic.css" />
<link rel="stylesheet" media="(min-width: 500px)" href="csswide.css" />
```

The possibilities are endless, and the technique has long been exploited by most websites: resizing your browser window triggers changes in the layout of the webpage.

With media queries nowadays it’s easy to adapt the design or resize elements in CSS. But what if you need to change the content or functionality? For example, on smaller screens you might want to use a shorter headline, fewer JavaScript libraries, or modify the actions of a widget.

It’s possible to analyze the viewport size in JavaScript, but it’s a little messy:

- Most browsers support window.innerWidth and window.innerHeight. (IE before version 10 in quirks mode required document.body.clientWidth and document.body.clientHeight.)
- You can listen for window.onresize events to react to viewport changes.
- All the main browsers support document.documentElement.clientWidth and document.documentElement.clientHeight, but the results are inconsistent: either the window or the document dimensions will be returned, depending on the browser and mode.

Even if you successfully detect viewport dimension changes, you must calculate factors such as orientation and aspect ratios yourself. There’s no guarantee it’ll match your browser’s assumptions when it applies media query rules in CSS.
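The matchMedia API that the article's title refers to hands all of this work to the browser: you pass it the same media query string you would write in CSS and get back a MediaQueryList whose matches flag and change event track the query for you. A minimal sketch, assuming a page with an h1 element (the chooseHeadline helper is illustrative, not part of any library):

```javascript
// Pure helper so the layout decision can be exercised outside a browser.
function chooseHeadline(isWide) {
  return isWide
    ? 'The complete, unabridged headline for large screens'
    : 'Short headline';
}

// Browser-only wiring, guarded so the file can also load under Node.
if (typeof window !== 'undefined' && window.matchMedia) {
  const mql = window.matchMedia('(min-width: 500px)');

  const apply = () => {
    document.querySelector('h1').textContent = chooseHeadline(mql.matches);
  };

  apply();                                // run once on load
  mql.addEventListener('change', apply);  // re-run whenever the query flips
}
```

Note that addEventListener on a MediaQueryList is the modern form; older browsers used the addListener method instead.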


The best colour tools for web designers

Original Source: http://feedproxy.google.com/~r/CreativeBloq/~3/XoqHAngN_VI/the-best-colour-tools-for-web-designers

As web designers, one of the most important choices we make has to do with our colour selections. Choose the wrong ones, and you might just lose out on an opportunity. It's true – the colours we choose can have a psychological impact on those who view them.

For example, red is generally viewed as a high-energy colour, while blue implies calmness and peace. To illustrate this point, consider the colours you might use on a website selling children's toys versus a site for a law firm. Chances are, you'll go with bright, vibrant colours for the former, and muted tones of blue and grey for the latter.

But how do you know which colours work well together? Luckily, you don't have to be a master at colour theory to put together a workable colour palette. To help you with the important task of colour selection, here are some of the best free colour web design tools (plus one special bonus at the end for Mac users).

01. HueSnap

Snap inspiration on the go and turn it into colour palettes

Inspiration can strike at any time. It might be the decor of a hotel room or the light in the park one evening that sparks the inspiration for your next website colour scheme. For when that happens, HueSnap is here to help. You can snap a photo and use HueSnap to extract the colours from the image and make them into a palette. 

The app is tailored for mobile use, and you can save and share your palettes with others. There are plenty of features to help you modify a palette, such as options to choose complementary and compound colours, and your palettes can have up to six colours each.

02. Khroma

Khroma uses AI to suggest colours you’ll like

Khroma is an AI colour tool that aims to help you easily browse and compare original colour combinations. With it, users train an AI algorithm to act like an extension of their brain. Users start by picking 50 colours they like, and these colours are used to train a neural network that can recognise hundreds of thousands of other similar colours. Find out more about Khroma and how to use it here.

03. Coolors.co

The Explore section includes hundreds – if not thousands – of palette options

Coolors offers a wide variety of tools for adjusting the palette just the way you want it. In addition, you can export your final creation in many different formats so you can use it virtually wherever you want. 

Coolors isn’t just a tool to create a colour palette, it also allows you to view other completed creations from other users so that you can draw inspiration. The Explore section has hundreds (if not thousands) of palettes you can view, save, and edit yourself. Even better, Coolors is available on desktop computers, and as an iOS application, an Adobe Photoshop and Illustrator add-on – and even a Google Chrome Extension for easy access.

04. Adobe Color CC

This has been around a while, but is still incredibly useful

Free tool Adobe Color CC has been around for a while, and it's one of the best colour tools out there for picking a colour palette. Not only can you create your own colour schemes, but you can also explore what others have created. 

Select a colour from the wheel or from an image and apply colour rules such as only using complementary colours, monochromatic colours or shades of the colour you select, to generate a colour palette. Or, click on each colour and explore the colour wheel to customise the selection. As an added bonus, you can save the themes you create to your Adobe library.

05. Colordot

Use simple mouse gestures to build up your colour palette

Colordot by Hailpixel is an excellent free online tool for creating a colour palette. Using simple mouse gestures, you can select and save colours: move your mouse back and forth for hue, up and down for lightness, scroll for saturation, and click to save a colour to your palette. Click the toggle icon to see each colour's RGB and HSL values. There is also a $0.99/£0.99 iOS app that lets you capture colours with your camera.

06. Eggradients

Gradient inspiration and thought-provoking names

Eggradients offers ideas for beautiful gradients to use within your design work, put together by someone with both a great eye for colour and an interesting sense of humour. Each gradient, displayed in an egg shape, comes with its own thought-provoking name. Examples include 'Wozniak’s Broken Heart' for a pale blue and 'Merciful Enemy' for a yellow to green transition. 

07. 147 Colors

This free tool includes the standard CSS colours

When you're responsible for generating easy-to-read CSS, sometimes using standard colours and colour names is the way to go. Thanks to 147 Colors by Brian Maier Jr, you can get a glimpse of all of them, and pick the ones that work for you. 

It contains the 17 standard colours, plus 130 other CSS colour names. Filter the results by shades of blue, green and so on, or choose from the full rainbow of 147 colours.

08. Canva Color Palette Generator

Create a colour palette based on an image

The Color Palette Generator by Canva is perfect if you're looking to create a colour palette based around a particular image.  Although other tools offer similar options, Canva’s is super-simple to use: you upload an image and the generator will return a palette of the five main colours contained in it. You can click on the colours you like and copy the HEX value to your clipboard.

Unfortunately, this is where the usefulness of Canva’s offering ends, as this is all you can do with its palette generator – you cannot adjust the colours of the palette. The only other options you have are to copy the hex values provided or upload another photo.

09. Material Design Palette

Create a palette based on Google’s Material Design principles

With Material Design Palette you can select two colours, which are then converted into a full colour palette for you to download, complete with a preview. 

The company also offers Material Design Colors, which enables designers to see the different shades of a colour, along with their corresponding HEX values.

10. ColourCode

Save and export colour palettes as SCSS, LESS or PNG files

ColourCode by Tamino Martinius and Andreas Storm is similar to Colordot, but it offers a bit more guidance. The free tool greets you with a full-screen background that changes colour as you move your cursor. It also offers different categories for the palette (analogue, triad, quad, monochrome, monochrome light and so on).

With ColourCode, you can set different options along the colour wheel to create an original combination. You can also save your palette or export it as an SCSS or LESS file. You can even export to PNG, if you'd like.

11. Color Calculator

Select a colour and a colour harmony, and this tool will generate a colour palette

The Color Calculator is straightforward: you select a colour and a colour harmony option. In return, you get back the results of your recommended colour scheme. 

What's nice about this site, however, is that it also goes into a little bit of detail about colour theory and how it relates to your colour choices.
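Under the hood, harmony rules like these reduce to simple rotations around the hue wheel. A rough sketch of that idea (the function and scheme names here are my own, not the Color Calculator's):

```javascript
// Given a hue in degrees (0-359), return the partner hues for a harmony scheme.
function harmonyHues(hue, scheme) {
  const wrap = h => ((h % 360) + 360) % 360; // keep results within 0-359

  switch (scheme) {
    case 'complementary': return [wrap(hue + 180)];          // opposite side
    case 'triadic':       return [wrap(hue + 120), wrap(hue + 240)]; // thirds
    case 'analogous':     return [wrap(hue - 30), wrap(hue + 30)];   // neighbours
    default:              return [];
  }
}
```

For example, harmonyHues(30, 'complementary') returns [210]: the complement of an orange hue is a blue one.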

12. HTML Color Code

This suite of tools includes a list of standard colour names

This bulging free suite of tools by Dixon & Moe includes an in-depth colour picker with plenty of explanations of colour rules; a series of colour charts featuring flat design colours, Google's Material design scheme and the classic web safe colour palette; and a list of standard HTML colour names and codes. 

This site also offers tutorials and other resources for web designers, and options to export results from its tools as HEX codes, HTML, CSS and SCSS styles.

13. W3Schools: Colors Tutorial

This free tutorial includes links to a number of handy colour tools

If you're looking for an all-in-one solution that includes a guide to colours, as well as a number of different tools, then the Colors Tutorial at W3Schools is the perfect choice.

Not only can you learn about colour theory, colour wheels, and colour hues, but you'll also be able to use the other tools it has, such as the Color Converter. With this tool, you're able to convert any colour to-and-from names, HEX codes, RGB, HSL, HWB and CMYK values.
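Conversions like the ones such tools perform are straightforward to script yourself. Here is a rough sketch of the HEX-to-RGB direction (the function name is my own):

```javascript
// Convert a HEX colour such as "#1e90ff" (or shorthand "#fff") to RGB components.
function hexToRgb(hex) {
  let h = hex.replace(/^#/, '');
  if (h.length === 3) {
    // Expand shorthand: "fff" -> "ffffff"
    h = h.split('').map(c => c + c).join('');
  }
  const value = parseInt(h, 16);
  return {
    r: (value >> 16) & 0xff, // top byte
    g: (value >> 8) & 0xff,  // middle byte
    b: value & 0xff,         // bottom byte
  };
}
```

For example, hexToRgb('#1e90ff') returns { r: 30, g: 144, b: 255 }.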

14. Digital Color Meter (Mac)

Mac’s built-in tool lets you grab colours from your screen

OK, Mac users: this one's for you. With your machine's built-in Digital Color Meter tool, you can grab a colour from anywhere on your screen, then get its value in decimal, hexadecimal or percentage form. You can even copy the selected colour as text or as an image.


Be Legendary. Nike Branding Concept for Tokyo 2020

Original Source: http://feedproxy.google.com/~r/abduzeedo/~3/4wQsSWfJdIg/be-legendary-nike-branding-concept-tokyo-2020

AoiroStudio
Aug 02, 2018

Daniele Caruso is a freelance illustrator based in Swindon, United Kingdom, working mainly in illustration, graphic design and branding. We are taking a look at his branding concept for Nike, Be Legendary, created for the much-anticipated Tokyo 2020. With the tagline "legendary", Daniele included mythological creatures to create an artistic atmosphere, alongside a colour palette that totally reminds me of Dotonbori (the bright heart of Osaka, Japan). What do you think? Would you like this kind of visual approach if it were from Nike?

More Links
danielecaruso.com
Behance



Collective #438

Original Source: http://feedproxy.google.com/~r/tympanus/~3/zfuMSJF18o8/

Inspirational Website of the Week: Volt By Drive

A great game-like design with some nice animations. Our pick this week.

Get inspired


Little Big City

A fantastic project by Yi Shen: generating a real city on a little planet with the help of ClayGL.

Check it out

Pyxel

Pyxel is a retro game development environment in Python.

Check it out

Introducing Fusion.js: A Plugin-based Universal Web Framework

Leo Horie from Uber Engineering introduces Fusion.js, an open source web framework for building lightweight, high-performing apps.

Read it

The Cost Of JavaScript In 2018

Addy Osmani covers some strategies you can use to deliver JavaScript efficiently while still giving users a valuable experience.

Read it

theDoodleLibrary

A fantastic collection of free, reusable drawings and doodles in a vector (SVG) format.

Check it out

CSS exclusions with Queen Bey

Chen Hui Jing writes about CSS Exclusions, and new CSS features in general, and why we should keep an eye on them regardless of current browser support.

Read it

Taskbook

Taskbook enables you to effectively manage your tasks and notes across multiple boards from within your terminal.

Check it out

The Clipboard API Crashcourse

A practical guide to the Clipboard API by David East.

Check it out

The Bullshit Web

A very interesting article by Nick Heer on the course the web took concerning unnecessary page load for questionable purposes.

Read it

Dynamic resources using the Network Information API and service workers

Learn about the new Network Information API that allows developers to determine the connection types and the underlying connection technology that the user agent is using. By Dean Hume.

Read it

CodeZen

With this tool you can generate shareable and elegant images from your source code.

Check it out

Between.js

A lightweight JavaScript (ES6) tweening library by Alexander Buzin.

Check it out

UI Sources

Get real product insights from the best designed and top grossing apps on the App Store with this email newsletter.

Check it out

Performance Techniques in 2017

A slide deck with lots of info on getting native performance with new Web APIs.

Check it out

The trick to viewport units on mobile

Louis Hoebregts shows an interesting trick to get viewport units behave on mobile.

Read it

ReportingObserver: know your code health

Eric Bidelman writes about the ReportingObserver, a new API that lets you know when your site uses a deprecated API or runs into a browser intervention.

Read it

Improve your motion

An article by Erick Leopoldo with practical tips on how to make animations better.

Read it

Free Font: Bivona

A playful, energetic font by Dathan Boardman from Rocket Type.

Get it

Space Loader

A great space themed loader by Chris Gannon.

Check it out

Collective #438 was written by Pedro Botelho and published on Codrops.

Will SiriKit’s Intents Fit Your App? If So, Here’s How To Use Them

Original Source: https://www.smashingmagazine.com/2018/04/sirikit-intents-app-guide/

Lou Franco

2018-04-11T17:00:44+02:00

Since iOS 5, Siri has helped iPhone users send messages, set reminders and look up restaurants with Apple’s apps. Starting in iOS 10, we have been able to use Siri in some of our own apps as well.

In order to use this functionality, your app must fit within Apple’s predefined Siri “domains and intents.” In this article, we’ll learn about what those are and see whether our apps can use them. We’ll take a simple app that is a to-do list manager and learn how to add Siri support. We’ll also go through the Apple developer website’s guidelines on configuration and Swift code for a new type of extension that was introduced with SiriKit: the Intents extension.

When you get to the coding part of this article, you will need Xcode (at least version 9.x), and it will help if you are familiar with iOS development in Swift, because we're going to add Siri to a small working app. We'll go through the steps of setting up an extension on Apple's developer website and adding the Siri extension code to the app.

“Hey Siri, Why Do I Need You?”

Sometimes I use my phone while on my couch, with both hands free, and I can give the screen my full attention. Maybe I’ll text my sister to plan our mom’s birthday or reply to a question in Trello. I can see the app. I can tap the screen. I can type.

But I might be walking around my town, listening to a podcast, when a text comes in on my watch. My phone is in my pocket, and I can’t easily answer while walking.


With Siri, I can hold down my headphone’s control button and say, “Text my sister that I’ll be there by two o’clock.” Siri is great when you are on the go and can’t give full attention to your phone or when the interaction is minor, but it requires several taps and a bunch of typing.

This is fine if I want to use Apple apps for these interactions. But some categories of apps, like messaging, have very popular alternatives. Other activities, such as booking a ride or reserving a table in a restaurant, are not even possible with Apple’s built-in apps but are perfect for Siri.

Apple’s Approach To Voice Assistants

To enable Siri in third-party apps, Apple had to decide on a mechanism to take the sound from the user’s voice and somehow get it to the app in a way that it could fulfill the request. To make this possible, Apple requires the user to mention the app’s name in the request, but they had several options of what to do with the rest of the request.

- It could have sent a sound file to the app. The benefit of this approach is that the app could try to handle literally any request the user might have for it. Amazon or Google might have liked this approach because they already have sophisticated voice-recognition services. But most apps would not be able to handle this very easily.
- It could have turned the speech into text and sent that. Because many apps don't have sophisticated natural-language implementations, the user would usually have to stick to very particular phrases, and non-English support would be up to the app developer to implement.
- It could have asked you to provide a list of phrases that you understand. This mechanism is closer to what Amazon does with Alexa (in its "skills" framework), and it enables far more uses of Alexa than SiriKit can currently handle. In an Alexa skill, you provide phrases with placeholder variables that Alexa will fill in for you. For example, "Alexa, remind me at $TIME$ to $REMINDER$" — Alexa will run this phrase against what the user has said and tell you the values for TIME and REMINDER. As with the previous mechanism, the developer needs to do all of the translation, and there isn't a lot of flexibility if the user says something slightly different.
- It could define a list of requests with parameters and send the app a structured request. This is actually what Apple does, and the benefit is that it can support a variety of languages, and it does all of the work to try to understand all of the ways a user might phrase a request. The big downside is that you can only implement handlers for requests that Apple defines. This is great if you have, for example, a messaging app, but if you have a music-streaming service or a podcast player, you have no way to use SiriKit right now.

Similarly, there are three ways for apps to talk back to the user: with sound, with text that gets converted, or by expressing the kind of thing you want to say and letting the system figure out the exact way to express it. The last solution (which is what Apple does) puts the burden of translation on Apple, but it gives you limited ways to use your own words to describe things.

The kinds of requests you can handle are defined in SiriKit’s domains and intents. An intent is a type of request that a user might make, like texting a contact or finding a photo. Each intent has a list of parameters — for example, texting requires a contact and a message.

A domain is just a group of related intents. Reading a text and sending a text are both in the messaging domain. Booking a ride and getting a location are in the ride-booking domain. There are domains for making VoIP calls, starting workouts, searching for photos and a few more things. SiriKit’s documentation contains a full list of domains and their intents.

A common criticism of Siri is that it seems unable to handle requests as well as Google and Alexa, and that the third-party voice ecosystem enabled by Apple’s competitors is richer.

I agree with those criticisms. If your app doesn’t fit within the current intents, then you can’t use SiriKit, and there’s nothing you can do. Even if your app does fit, you can’t control all of the words Siri says or understands; so, if you have a particular way of talking about things in your app, you can’t always teach that to Siri.

The hope of iOS developers is both that Apple will greatly expand its list of intents and that its natural language processing becomes much better. If it does that, then we will have a voice assistant that works without developers having to do translation or understand all of the ways of saying the same thing. And implementing support for structured requests is actually fairly simple to do — a lot easier than building a natural language parser.

Another big benefit of the intents framework is that it is not limited to Siri and voice requests. Even now, the Maps app can generate an intents-based request of your app (for example, a restaurant reservation). It does this programmatically (not from voice or natural language). If Apple allowed apps to discover each other's exposed intents, we'd have a much better way for apps to work together (as opposed to x-callback-style URLs).

Finally, because an intent is a structured request with parameters, there is a simple way for an app to express that parameters are missing or that it needs help distinguishing between some options. Siri can then ask follow-up questions to resolve the parameters without the app needing to conduct the conversation.

The Ride-Booking Domain

To understand domains and intents, let’s look at the ride-booking domain. This is the domain that you would use to ask Siri to get you a Lyft car.

Apple defines how to ask for a ride and how to get information about it, but there is actually no built-in Apple app that can actually handle this request. This is one of the few domains where a SiriKit-enabled app is required.

You can invoke one of the intents via voice or directly from Maps. Some of the intents for this domain are:

- Request a ride: Use this one to book a ride. You'll need to provide a pick-up and drop-off location, and the app might also need to know your party's size and what kind of ride you want. A sample phrase might be, "Book me a ride with <appname>."
- Get the ride's status: Use this intent to find out whether your request was received and to get information about the vehicle and driver, including their location. The Maps app uses this intent to show an updated image of the car as it is approaching you.
- Cancel a ride: Use this to cancel a ride that you have booked.

For any of these intents, Siri might need to know more information. As you'll see when we implement an intent handler, your Intents extension can tell Siri that a required parameter is missing, and Siri will prompt the user for it.

The fact that intents can be invoked programmatically by Maps shows how intents might enable inter-app communication in the future.

Note: You can get a full list of domains and their intents on Apple’s developer website. There is also a sample Apple app with many domains and intents implemented, including ride-booking.

Adding Lists And Notes Domain Support To Your App

OK, now that we understand the basics of SiriKit, let's look at how to add Siri support to an app. The process involves a fair amount of configuration and a class for each intent you want to handle.

The rest of this article consists of the detailed steps to add Siri support to an app. There are five high-level things you need to do:

Prepare to add a new extension to the app by creating provisioning profiles with new entitlements for it on Apple’s developer website.
Configure your app (via its plist) to use the entitlements.
Use Xcode’s template to get started with some sample code.
Add the code to support your Siri intent.
Configure Siri’s vocabulary via plists.

Don’t worry: We’ll go through each of these, explaining extensions and entitlements along the way.

To focus on just the Siri parts, I’ve prepared a simple to-do list manager, List-o-Mat.

Making lists in List-o-Mat

You can find the full source of the sample, List-o-Mat, on GitHub.

To create it, all I did was start with the Xcode Master-Detail app template and make both screens into a UITableView. I added a way to add and delete lists and items, and a way to check off items as done. All of the navigation is generated by the template.

To store the data, I used the Codable protocol (introduced at WWDC 2017) to encode structs as JSON and save them in a text file in the documents folder.
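A sketch of what that persistence can look like (the type and file names here are my own illustrations, not necessarily those in the real List-o-Mat source):

```swift
import Foundation

struct ListItem: Codable {
    var title: String
    var done: Bool
}

struct TodoList: Codable {
    var name: String
    var items: [ListItem]
}

// Path to a JSON file in the app's documents folder.
private var storeURL: URL {
    FileManager.default
        .urls(for: .documentDirectory, in: .userDomainMask)[0]
        .appendingPathComponent("lists.json")
}

func save(_ lists: [TodoList]) throws {
    let data = try JSONEncoder().encode(lists)  // Codable structs -> JSON
    try data.write(to: storeURL)
}

func load() throws -> [TodoList] {
    let data = try Data(contentsOf: storeURL)
    return try JSONDecoder().decode([TodoList].self, from: data)
}
```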

I’ve deliberately kept the code very simple. If you have any experience with Swift and making view controllers, then you should have no problem with it.

Now we can go through the steps of adding SiriKit support. The high-level steps would be the same for any app and whichever domain and intents you plan to implement. We’ll mostly be dealing with Apple’s developer website, editing plists and writing a bit of Swift.

For List-o-Mat, we’ll focus on the lists and notes domain, which is broadly applicable to things like note-taking apps and to-do lists.

In the lists and notes domain, we have the following intents that would make sense for our app.

Get a list of tasks.
Add a new task to a list.

Because the interactions with Siri actually happen outside of your app (maybe even when your app is not running), iOS uses an extension to implement this.

The Intents Extension

If you have not worked with extensions, you’ll need to know three main things:

An extension is a separate process. It is delivered inside of your app’s bundle, but it runs completely on its own, with its own sandbox.
Your app and extension can communicate with each other by being in the same app group. The easiest way is via the group’s shared sandbox folders (so, they can read and write to the same files if you put them there).
Extensions require their own app IDs, profiles and entitlements.

To add an extension to your app, start by logging into your developer account and going to the “Certificates, Identifiers, & Profiles” section.

Updating Your Apple Developer App Account Data

In our Apple developer account, the first thing we need to do is create an app group. Go to the “App Groups” section under “Identifiers” and add one.

Registering an app group

It must start with group., followed by your usual reverse-domain-based identifier. Because it has a prefix, you can use your app’s identifier for the rest.

Then, we need to update our app’s ID to use this group and to enable Siri:

Go to the “App IDs” section and click on your app’s ID;
Click the “Edit” button;
Enable app groups (if not enabled for another extension).
Enable app groups

Then configure the app group by clicking the “Edit” button. Choose the app group from before.
Set the name of the app group

Enable SiriKit.
Enable SiriKit

Click “Done” to save it.

Now, we need to create a new app ID for our extension:

In the same “App IDs” section, add a new app ID. This will be your app’s identifier, with a suffix. Do not use just Intents as a suffix because this name will become your module’s name in Swift and would then conflict with the real Intents.
Create an app ID for the Intents extension

Enable this app ID for app groups as well (and set up the group as we did before).

Now, create a development provisioning profile for the Intents extension, and regenerate your app’s provisioning profile. Download and install them as you would normally do.

Now that our profiles are installed, we need to go to Xcode and update the app’s entitlements.

Updating Your App’s Entitlements In Xcode

Back in Xcode, choose your project’s name in the project navigator. Then, choose your app’s main target, and go to the “Capabilities” tab. In there, you will see a switch to turn on Siri support.

Enable SiriKit in your app’s entitlements.

Further down the list, you can turn on app groups and configure it.

Configure the app’s app group

If you have set it up correctly, you’ll see this in your app’s .entitlements file:

The plist shows the entitlements that you set

Now, we are finally ready to add the Intents extension target to our project.

Adding The Intents Extension

We’re finally ready to add the extension. In Xcode, choose “File” → “New Target.” This sheet will pop up:

Add the Intents extension to your project

Choose “Intents Extension” and click the “Next” button. Fill out the following screen:

Configure the Intents extension

The product name needs to match whatever you made the suffix in the intents app ID on the Apple developer website.

We are choosing not to add an intents UI extension. This isn’t covered in this article, but you could add it later if you need one. Basically, it’s a way to put your own branding and display style into Siri’s visual results.

When you are done, Xcode will create an intents handler class that we can use as a starting point for our Siri implementation.

The Intents Handler: Resolve, Confirm And Handle

Xcode generated a new target that has a starting point for us.

The first thing you have to do is set up this new target to be in the same app group as the app. As before, go to the “Capabilities” tab of the target, turn on app groups, and configure it with your group name. Remember, apps in the same group have a sandbox that they can use to share files with each other. We need this in order for Siri requests to get to our app.

List-o-Mat has a function that returns the group document folder. We should use it whenever we want to read or write to a shared file.

func documentsFolder() -> URL? {
    return FileManager.default.containerURL(forSecurityApplicationGroupIdentifier: "group.com.app-o-mat.ListOMat")
}

For example, when we save the lists, we use this:

func save(lists: Lists) {
    guard let docsDir = documentsFolder() else {
        fatalError("no docs dir")
    }

    let url = docsDir.appendingPathComponent(fileName, isDirectory: false)

    // Encode lists as JSON and save to url
}

The Intents extension template created a file named IntentHandler.swift, with a class named IntentHandler. It also configured it to be the intents’ entry point in the extension’s plist.

The intent extension plist configures IntentHandler as the entry point

In this same plist, you will see a section to declare the intents we support. We’re going to start with the one that allows searching for lists, which is named INSearchForNotebookItemsIntent. Add it to the array under IntentsSupported.

Add the intent’s name to the intents plist

Now, go to IntentHandler.swift and replace its contents with this code:

import Intents

class IntentHandler: INExtension {
    override func handler(for intent: INIntent) -> Any? {
        switch intent {
        case is INSearchForNotebookItemsIntent:
            return SearchItemsIntentHandler()
        default:
            return nil
        }
    }
}

The handler function is called to get an object to handle a specific intent. You can just implement all of the protocols in this class and return self, but we’ll put each intent in its own class to keep it better organized.

Because we intend to have a few different classes, let’s give them a common base class for code that we need to share between them:

class ListOMatIntentsHandler: NSObject {
}

The intents framework requires us to inherit from NSObject. We’ll fill in some methods later.

We start our search implementation with this:

class SearchItemsIntentHandler: ListOMatIntentsHandler,
INSearchForNotebookItemsIntentHandling {
}

To build an intent handler, we need to implement three basic steps:

Resolve the parameters. Make sure required parameters are given, and disambiguate any you don’t fully understand.
Confirm that the request is doable. This is often optional, but even if you know that each parameter is good, you might still need access to an outside resource or have other requirements.
Handle the request. Do the thing that is being requested.

INSearchForNotebookItemsIntent, the first intent we’ll implement, can be used as a task search. The kinds of requests we can handle with this are, “In List-o-Mat, show the grocery store list” or “In List-o-Mat, show the store list.”

Aside: “List-o-Mat” is actually a bad name for a SiriKit app because Siri has a hard time with hyphens in app names. Luckily, SiriKit allows us to have alternate names and to provide pronunciation. In the app’s Info.plist, add this section:

Add alternate app names and pronunciation guides to the app plist

This allows the user to say “list oh mat” and for that to be understood as a single word (without hyphens). It doesn’t look ideal on the screen, but without it, Siri sometimes thinks “List” and “Mat” are separate words and gets very confused.

Resolve: Figuring Out The Parameters

For a search for notebook items, there are several parameters:

the item type (a task, a task list, or a note),
the title of the item,
the content of the item,
the completion status (whether the task is marked done or not),
the location it is associated with,
the date it is associated with.

We require only the first two, so we’ll need to write resolve functions for them. INSearchForNotebookItemsIntent has methods for us to implement.

Because we only care about showing task lists, we’ll hardcode that into the resolve for item type. In SearchItemsIntentHandler, add this:

func resolveItemType(for intent: INSearchForNotebookItemsIntent,
                     with completion: @escaping (INNotebookItemTypeResolutionResult) -> Void) {
    completion(.success(with: .taskList))
}

So, no matter what the user says, we’ll be searching for task lists. If we wanted to expand our search support, we’d let Siri try to figure this out from the original phrase and then just use completion(.needsValue()) if the item type was missing. Alternatively, we could try to guess from the title by seeing what matches it. In this case, we would complete with success when Siri knows what it is, and we would use completion(.notRequired()) when we are going to try multiple possibilities.
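As a hypothetical sketch (this is not List-o-Mat code), a resolve for an app that supported all three item types might ask Siri to prompt for a value only when the type is missing from the phrase:

```swift
func resolveItemType(for intent: INSearchForNotebookItemsIntent,
                     with completion: @escaping (INNotebookItemTypeResolutionResult) -> Void) {
    if intent.itemType == .unknown {
        // Nothing in the phrase told us the type; have Siri ask for it.
        completion(.needsValue())
    } else {
        // The user said "note", "task" or "task list"; accept it.
        completion(.success(with: intent.itemType))
    }
}
```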

Title resolution is a little trickier. What we want is for Siri to use a list if it finds one with an exact match for what you said. If it’s unsure or if there is more than one possibility, then we want Siri to ask us for help in figuring it out. To do this, SiriKit provides a set of resolution enums that let us express what we want to happen next.

So, if you say “Grocery Store,” then Siri would have an exact match. But if you say “Store,” then Siri would present a menu of matching lists.

We’ll start with this function to give the basic structure:

func resolveTitle(for intent: INSearchForNotebookItemsIntent, with completion: @escaping (INSpeakableStringResolutionResult) -> Void) {
    guard let title = intent.title else {
        completion(.needsValue())
        return
    }

    let possibleLists = getPossibleLists(for: title)
    completeResolveListName(with: possibleLists, for: title, with: completion)
}

We’ll implement getPossibleLists(for:) and completeResolveListName(with:for:with:) in the ListOMatIntentsHandler base class.

getPossibleLists(for:) needs to try to fuzzy match the title that Siri passes us with the actual list names.

public func getPossibleLists(for listName: INSpeakableString) -> [INSpeakableString] {
    var possibleLists = [INSpeakableString]()
    for l in loadLists() {
        if l.name.lowercased() == listName.spokenPhrase.lowercased() {
            return [INSpeakableString(spokenPhrase: l.name)]
        }
        if l.name.lowercased().contains(listName.spokenPhrase.lowercased()) || listName.spokenPhrase.lowercased() == "all" {
            possibleLists.append(INSpeakableString(spokenPhrase: l.name))
        }
    }
    return possibleLists
}

We loop through all of our lists. If we get an exact match, we’ll return it, and if not, we’ll return an array of possibilities. In this function, we’re simply checking to see whether the word the user said is contained in a list name (so, a pretty simple match). This lets “Grocery” match “Grocery Store.” A more advanced algorithm might try to match based on words that sound the same (for example, with the Soundex algorithm).
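To illustrate the sound-alike idea, here is a rough, self-contained sketch of a Soundex-style code in Swift. It is not part of List-o-Mat, and it simplifies the classic algorithm (in this sketch, any non-coded letter resets the previous digit):

```swift
import Foundation

// Map consonants to Soundex digit groups.
let soundexCodes: [Character: Character] = [
    "b": "1", "f": "1", "p": "1", "v": "1",
    "c": "2", "g": "2", "j": "2", "k": "2", "q": "2",
    "s": "2", "x": "2", "z": "2",
    "d": "3", "t": "3",
    "l": "4",
    "m": "5", "n": "5",
    "r": "6"
]

func soundex(_ word: String) -> String {
    let letters = Array(word.lowercased().filter { $0.isLetter })
    guard let first = letters.first else { return "" }
    var result = String(first).uppercased()
    var lastCode = soundexCodes[first]
    for ch in letters.dropFirst() {
        let code = soundexCodes[ch]
        // Append a digit only when it differs from the previous one.
        if let code = code, code != lastCode {
            result.append(code)
        }
        lastCode = code
    }
    // Pad or trim to the traditional four characters.
    return String((result + "000").prefix(4))
}

print(soundex("Grocery")) // prints "G626"
print(soundex("Grosery")) // prints "G626", so a misheard spelling still matches
```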

completeResolveListName(with:for:with:) is responsible for deciding what to do with this list of possibilities.

public func completeResolveListName(with possibleLists: [INSpeakableString], for listName: INSpeakableString, with completion: @escaping (INSpeakableStringResolutionResult) -> Void) {
    switch possibleLists.count {
    case 0:
        completion(.unsupported())
    case 1:
        if possibleLists[0].spokenPhrase.lowercased() == listName.spokenPhrase.lowercased() {
            completion(.success(with: possibleLists[0]))
        } else {
            completion(.confirmationRequired(with: possibleLists[0]))
        }
    default:
        completion(.disambiguation(with: possibleLists))
    }
}

If we got an exact match, we tell Siri that we succeeded. If we got one inexact match, we tell Siri to ask the user if we guessed it right.

If we got multiple matches, then we use completion(.disambiguation(with: possibleLists)) to tell Siri to show a list and let the user pick one.

Now that we know what the request is, we need to look at the whole thing and make sure we can handle it.

Confirm: Check All Of Your Dependencies

In this case, if we have resolved all of the parameters, we can always handle the request. Typical confirm() implementations might check the availability of external services or check authorization levels.

Because confirm() is optional, we could just do nothing, and Siri would assume we could handle any request with resolved parameters. To be explicit, we could use this:

func confirm(intent: INSearchForNotebookItemsIntent, completion: @escaping (INSearchForNotebookItemsIntentResponse) -> Void) {
    completion(INSearchForNotebookItemsIntentResponse(code: .success, userActivity: nil))
}

This means we can handle anything.

Handle: Do It

The final step is to handle the request.

func handle(intent: INSearchForNotebookItemsIntent, completion: @escaping (INSearchForNotebookItemsIntentResponse) -> Void) {
    guard
        let title = intent.title,
        let list = loadLists().filter({ $0.name.lowercased() == title.spokenPhrase.lowercased() }).first
    else {
        completion(INSearchForNotebookItemsIntentResponse(code: .failure, userActivity: nil))
        return
    }

    let response = INSearchForNotebookItemsIntentResponse(code: .success, userActivity: nil)
    response.tasks = list.items.map {
        return INTask(title: INSpeakableString(spokenPhrase: $0.name),
                      status: $0.done ? INTaskStatus.completed : INTaskStatus.notCompleted,
                      taskType: INTaskType.notCompletable,
                      spatialEventTrigger: nil,
                      temporalEventTrigger: nil,
                      createdDateComponents: nil,
                      modifiedDateComponents: nil,
                      identifier: "\(list.name)\t\($0.name)")
    }
    completion(response)
}

First, we find the list based on the title. At this point, resolveTitle has already made sure that we’ll get an exact match. But if there’s an issue, we can still return a failure.

When we have a failure, we have the option of passing a user activity. If your app uses Handoff and has a way to handle this exact type of request, then Siri might try deferring to your app to try the request there. It will not do this when we are in a voice-only context (for example, you started with “Hey Siri”), and it doesn’t guarantee that it will do it in other cases, so don’t count on it.

This is now ready to test. Choose the intent extension in the target list in Xcode. But before you run it, edit the scheme.

Edit the scheme of the intent to add a sample phrase for debugging.

That brings up a way to provide a query directly:

Add the sample phrase to the Run section of the scheme.

Notice that I am using “ListOMat” because of the hyphens issue mentioned above. Luckily, it’s pronounced the same as my app’s name, so it should not be much of an issue.

Back in the app, I made a “Grocery Store” list and a “Hardware Store” list. If I ask Siri for the “store” list, it will go through the disambiguation path, which looks like this:

Siri handles the request by asking for clarification.

If you say “Grocery Store,” then you’ll get an exact match, which goes right to the results.

Adding Items Via Siri

Now that we know the basic concepts of resolve, confirm and handle, we can quickly add an intent to add an item to a list.

First, add INAddTasksIntent to the extension’s plist:

Add the INAddTasksIntent to the extension plist

Then, update our IntentHandler’s handler function.

override func handler(for intent: INIntent) -> Any? {
    switch intent {
    case is INSearchForNotebookItemsIntent:
        return SearchItemsIntentHandler()
    case is INAddTasksIntent:
        return AddItemsIntentHandler()
    default:
        return nil
    }
}

Add a stub for the new class:

class AddItemsIntentHandler: ListOMatIntentsHandler, INAddTasksIntentHandling {
}

Adding an item needs a similar resolve for searching, except with a target task list instead of a title.

func resolveTargetTaskList(for intent: INAddTasksIntent, with completion: @escaping (INTaskListResolutionResult) -> Void) {
    guard let title = intent.targetTaskList?.title else {
        completion(.needsValue())
        return
    }

    let possibleLists = getPossibleLists(for: title)
    completeResolveTaskList(with: possibleLists, for: title, with: completion)
}

completeResolveTaskList is just like completeResolveListName, but with slightly different types (a task list instead of the title of a task list).

public func completeResolveTaskList(with possibleLists: [INSpeakableString], for listName: INSpeakableString, with completion: @escaping (INTaskListResolutionResult) -> Void) {
    let taskLists = possibleLists.map {
        return INTaskList(title: $0, tasks: [], groupName: nil, createdDateComponents: nil, modifiedDateComponents: nil, identifier: nil)
    }

    switch possibleLists.count {
    case 0:
        completion(.unsupported())
    case 1:
        if possibleLists[0].spokenPhrase.lowercased() == listName.spokenPhrase.lowercased() {
            completion(.success(with: taskLists[0]))
        } else {
            completion(.confirmationRequired(with: taskLists[0]))
        }
    default:
        completion(.disambiguation(with: taskLists))
    }
}

It has the same disambiguation logic and behaves in exactly the same way. Saying “Store” needs to be disambiguated, and saying “Grocery Store” would be an exact match.

We’ll leave confirm unimplemented and accept the default. For handle, we need to add an item to the list and save it.

func handle(intent: INAddTasksIntent, completion: @escaping (INAddTasksIntentResponse) -> Void) {
    var lists = loadLists()
    guard
        let taskList = intent.targetTaskList,
        let listIndex = lists.index(where: { $0.name.lowercased() == taskList.title.spokenPhrase.lowercased() }),
        let itemNames = intent.taskTitles, itemNames.count > 0
    else {
        completion(INAddTasksIntentResponse(code: .failure, userActivity: nil))
        return
    }

    // Get the list
    var list = lists[listIndex]

    // Add the items
    var addedTasks = [INTask]()
    for item in itemNames {
        list.addItem(name: item.spokenPhrase, at: list.items.count)
        addedTasks.append(INTask(title: item, status: .notCompleted, taskType: .notCompletable, spatialEventTrigger: nil, temporalEventTrigger: nil, createdDateComponents: nil, modifiedDateComponents: nil, identifier: nil))
    }

    // Save the new list
    lists[listIndex] = list
    save(lists: lists)

    // Respond with the added items
    let response = INAddTasksIntentResponse(code: .success, userActivity: nil)
    response.addedTasks = addedTasks
    completion(response)
}

We get a list of items and a target list. We look up the list and add the items. We also need to prepare a response for Siri to show with the added items and send it to the completion function.

This function can handle a phrase like, “In ListOMat, add apples to the grocery list.” It can also handle a list of items like, “rice, onions and olives.”

Siri adds a few items to the grocery store list

Almost Done, Just A Few More Settings

All of this will work in your simulator or local device, but if you want to submit this, you’ll need to add a NSSiriUsageDescription key to your app’s plist, with a string that describes what you are using Siri for. Something like “Your requests about lists will be sent to Siri” is fine.

You should also add a call to:

INPreferences.requestSiriAuthorization { (status) in }

Put this in your main view controller’s viewDidLoad to ask the user for Siri access. This will show the message you configured above and also let the user know that they could be using Siri for this app.
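A slightly fuller sketch of that call might look like the following. The view controller name here is illustrative, not List-o-Mat’s actual class:

```swift
import UIKit
import Intents

// Hypothetical main view controller; only the Siri request matters here.
class ListsViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        INPreferences.requestSiriAuthorization { status in
            // React to the outcome, e.g. hide Siri-related hints if denied.
            if status != .authorized {
                print("Siri access was not granted")
            }
        }
    }
}
```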

The device will ask for permission if you try to use Siri in the app.

Finally, you’ll need to tell Siri what to tell the user if the user asks what your app can do, by providing some sample phrases:

Create a plist file in your app (not the extension), named AppIntentVocabulary.plist.
Fill out the intents and phrases that you support.

Add an AppIntentVocabulary.plist to list the sample phrases that will invoke the intent you handle.

There is no way to really know all of the phrases that Siri will use for an intent, but Apple does provide a few samples for each intent in its documentation. The sample phrases for task-list searching show us that Siri can understand “Show me all my notes on <appName>,” but I found other phrases by trial and error (for example, Siri understands what “lists” are too, not just notes).
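For reference, the vocabulary file pairs each intent name with example phrases via the IntentPhrases, IntentName and IntentExamples keys. A minimal fragment for our search intent might look like this (the example phrase itself is just an illustration):

```xml
<key>IntentPhrases</key>
<array>
    <dict>
        <key>IntentName</key>
        <string>INSearchForNotebookItemsIntent</string>
        <key>IntentExamples</key>
        <array>
            <string>Show the grocery store list in ListOMat</string>
        </array>
    </dict>
</array>
```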

Summary

As you can see, adding Siri support to an app has a lot of steps, with a lot of configuration. But the code needed to handle the requests was fairly simple.

There are a lot of steps, but each one is small, and you might be familiar with a few of them if you have used extensions before.

Here is what you’ll need to prepare for a new extension on Apple’s developer website:

Make an app ID for an Intents extension.
Make an app group if you don’t already have one.
Use the app group in the app ID for the app and extension.
Add Siri support to the app’s ID.
Regenerate the profiles and download them.

And here are the steps in Xcode for creating Siri’s Intents extension:

Add an Intents extension using the Xcode template.
Update the entitlements of the app and extension to match the profiles (groups and Siri support).
Add your intents to the extension’s plist.

And you’ll need to add code to do the following things:

Use the app group sandbox to communicate between the app and extension.
Add classes to support each intent with resolve, confirm and handle functions.
Update the generated IntentHandler to use those classes.
Ask for Siri access somewhere in your app.

Finally, there are some Siri-specific configuration settings:

Add the Siri support security string to your app’s plist.
Add sample phrases to an AppIntentVocabulary.plist file in your app.
Run the intent target to test; edit the scheme to provide the phrase.

OK, that is a lot, but if your app fits one of Siri’s domains, then users will expect to be able to interact with it via voice. And because the competition among voice assistants is so fierce, we can expect WWDC 2018 to bring a bunch more domains and, hopefully, a much better Siri.

Further Reading

“SiriKit,” Apple
The technical documentation contains the full list of domains and intents.
“Guides and Sample Code,” Apple
Includes code for many domains.
“Introducing SiriKit” (video, Safari only), WWDC 2016 Apple
“What’s New in SiriKit” (video, Safari only), WWDC 2017, Apple
Apple introduces lists and notes
“Lists and Notes,” Apple
The full list of lists and notes intents.


20 Free Plugins for WordPress Slideshows & Galleries

Original Source: http://feedproxy.google.com/~r/1stwebdesigner/~3/4mrsJPorDtw/

Image slideshows and galleries are pretty much a necessity these days. Good thing there are a ton of powerful, yet free WordPress plugins to choose from that can get the job done for you.

Because there are so many plugins to choose from, making the right choice is that much more difficult.

Here are our 20 choice picks for all of your gallery and slideshow needs. Not only are these plugins free, but we’ve also put some consideration into performance, as well. After all, speed matters just as much as functionality.

You might also like our collection of the Top 100 WordPress Plugins for 2017.

WordPress Slideshows Plugins

Soliloquy Lite

The free version of the popular slider plugin, Soliloquy Lite is an easy-to-use and flexible way to add great-looking slideshows to your WordPress website. Sliders you create are fully responsive and look great on any device.

Soliloquy Lite

Meta Slider

What makes Meta Slider so unique is that it utilizes four different types of slideshow scripts: Flex Slider 2, Nivo Slider, Responsive Slides and Coin Slider. Each has its own selection of transition effects. You have the ability to choose the one that works best for you.

Meta Slider

Smart Slider 3

Smart Slider 3 gives you features that are normally reserved for “pro” versions (even though there is a paid version available here as well). The top-shelf features in the free version include dynamic slides – which can be automatically created from your WordPress posts. Also featured are video slides from YouTube and Vimeo content.

Smart Slider 3

Ultimate Responsive Image Slider

Beyond the responsiveness its name promises, Ultimate Responsive Image Slider sports a plethora of options to help you tailor a slideshow to your specific needs. For example, there’s now an option for Auto Height, which will allow slides of different heights to transition smoothly.

Ultimate Responsive Image Slider

WordPress Carousel Free

Carousel sliders are often a handy way to display sponsored logos. With WordPress Carousel Free, you’ll be able to create a responsive carousel with mobile touch support.

WordPress Carousel Free

WP Slick Slider and Image Carousel

Choose between an image carousel and a more traditional slideshow with WP Slick Slider and Image Carousel. The plugin comes with pre-defined designs to ensure a great look.

WP Slick Slider and Image Carousel

Tribulant Slideshow Gallery

Tribulant Slideshow Gallery features a variety of styles – including the ability to display slide thumbnails above or below a slider. Through the use of WordPress Shortcodes, there are many customization options available.

Tribulant Slideshow Gallery

Genesis Responsive Slider

If you’re using the Genesis Framework for creating WordPress themes, then the Genesis Responsive Slider will fit right in. The plugin will take the featured image, title and excerpt from a page or post and create a simple slideshow.

Genesis Responsive Slider

Cyclone Slider 2

Cyclone Slider 2 is billed as being easy enough for beginners and powerful enough for hardcore developers. The ability to create custom templates speaks to its appeal for web professionals, while being able to choose from a ready-made template lends itself to the novice.

Cyclone Slider 2

Meteor Slides

If you’re into having a lot of choices when it comes to slide transition styles, Meteor Slides has you covered with over 20 of them. The mobile-friendly slideshows are also compatible with WordPress Multisite.

Meteor Slides

WordPress Galleries Plugins

Envira Gallery Lite

The cousin of Soliloquy (mentioned above), Envira Gallery Lite shares a very similar interface and feature set. Galleries are optimized for both speed and SEO.

Envira Gallery Lite

Gallery

Gallery features a number of ways to display photos on your site. You can use a traditional photo gallery, a photo album made up of several galleries, a slideshow or a single-image browser.

Gallery

Simple Lightbox

Simple Lightbox is a highly-customizable and lightweight plugin for displaying attractive lightboxes for your photos. Using themes, you’re able to customize the look and layout. Links can be automatically activated so that you won’t have to manually code them in.

Simple Lightbox

FooBox Image Lightbox

FooBox Image Lightbox is a bit unique in that it works in conjunction with an already existing WordPress gallery plugin (including their own FooGallery) and adds a slick, responsive lightbox. It also works directly with WordPress galleries and captioned images.

FooBox Image Lightbox

Gallery by BestWebSoft

Gallery by BestWebSoft helps you easily build an unlimited number of photo galleries and albums. You may custom sort galleries using a number of different criteria. And, everything you create will be responsive to ensure mobile compatibility.

Gallery by BestWebSoft

Easy FancyBox

Easy FancyBox will automatically link your .gif, .jpg and .png images inserted from the WordPress Media Library to open in a beautiful FancyBox lightbox. It also supports native WordPress Galleries and the popular NextGen Gallery plugin.

Easy FancyBox

Unite Gallery Lite

Aimed at being both a video and image gallery solution, Unite Gallery Lite is built for speed and power. It’s responsive, touch-enabled and even has a zoom feature.

Unite Gallery Lite

Instagram Gallery

Bring in images from your Instagram account and display them as either a photo gallery or slideshow. As you add images to Instagram, the plugin will automatically add them to your WordPress site as well.

Instagram Gallery

Awesome Flickr Gallery

If you’re using Flickr, Awesome Flickr Gallery is quite a flexible solution for displaying galleries and albums on your WordPress site. It’s compatible with both public and private photos, offers image size and cropping customization, and can be spiffed-up via CSS.

Awesome Flickr Gallery

Photoswipe Masonry Gallery

Using the PhotoSwipe JS gallery, Photoswipe Masonry Gallery turns standard WordPress Galleries into attractive masonry-style displays. Masonry is great for times when you want to display thumbnails of different sizes in a neat and orderly manner.

Photoswipe Masonry Gallery

Define Your Image

Now that we’ve found 20 of the top free slideshow and gallery plugins for WordPress, it’s time to put them to good use. Find the one(s) that best represent your vision and share it with the world!