Axure RP is regarded as software with a steep learning curve, and the reason is clear: it has a lot to offer, with many features packed in, and this is not obvious at first glance.
In general, you can do a lot in Axure RP. You can create wireframes, diagrams or high-fidelity UIs (as you usually do in Sketch, Figma or Adobe XD), but in this article, we will focus on prototyping interactions. So I'm not going to do a deep dive into the Axure interface and features, simply because it is outside the scope of this article and would probably require an entire chapter of a book.
You don't need any previous experience with Axure to follow along with this article, but if you do have some, this article may shed light on core concepts related to interactions and prototyping in general that are often misunderstood or not obvious to designers.
Axure RP is a powerful prototyping software with a lot of history. It has been around for many years and is available for Windows and Mac. Axure gained a lot of functionality over time, and today you can look at Axure as an all-in-one tool. You can basically create a lot of UI/UX artefacts without leaving Axure.
Designers coming from other screen design tools like Sketch, Figma and Adobe XD can find it difficult to understand how Axure RP actually works, its terminology and tools, and how to build an interactive prototype with it.
Prototyping is an important step in UX methodology and can play a big role in improving the user experience. A prototype is a simulation, usually used to help understand and test how an application (or feature) works, what it does and how to interact with it.
In order to create any kind of prototype, you will need to add interactions to some of your design elements.
If you have used screen design tools like Sketch, Figma, Adobe XD or any other general design tool like Photoshop or Affinity Photo, then you are pretty much used to the tool metaphor. Tool metaphor? Let me explain.
The way these apps model interaction is by using existing mental models that people have with tools in real life.
In real life, when you want to build something, you use different tools to achieve this. So, for instance, if you want to do a watercolor painting, you have to use a glass of water and brushes. Both the brushes and the glass of water are tools to help you place the colors on the canvas to accomplish your watercolor masterpiece.
In software applications, you can use these existing mental models and add on top of them, conceptual models.
Next, I’m going to try to explain the conceptual models used in these kinds of applications by breaking them into levels. Each new level adds more abstraction and knowledge on top of the previous one.
So... let's take, for instance, Affinity Photo (which is very similar to Photoshop) and see how the conceptual model I described earlier is used. All the available tools are represented by icons in the tools toolbar. You interact with the tools the same way you interact with tools in real life: you pick a tool and use it to draw something on the artboard (of course, we need to make sure we've set the right colors first). Just a note here: I've simplified this example a lot, but you get the idea.
We, as designers, interact with the same concepts and, in most cases, the same terminology from real life, like:
As you can see, these tools help us to place the colors on the artboard. The way the colors are applied is in our direct control through the tool.
So the conceptual flow for the first level has these steps:
Let's now go a bit further, continuing with our Affinity Photo example, and try the rectangle tool, which adds a vector (object) rectangle to the artboard. If you are familiar with vector drawing apps, there is nothing new up to this point; everything should be familiar.
But there is something different here: we’ve used a special tool, the rectangle tool in our case, to place a rectangle object on the artboard; basically, we no longer place colors directly, but the object renders its geometry using some attributes that we can change.
This is actually a big change, because behind the scenes the software is doing the heavy lifting, such as drawing the geometry using sophisticated mathematical functions. All we do is change some of the properties that affect the behaviour of these mathematical functions.
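To make this idea concrete, here is a minimal sketch in TypeScript of a property-driven vector object. The names (`Rect`, `area`, `describe`) are purely illustrative and not part of any real drawing app's API; the point is that we only change properties, and the software derives the geometry from them.

```typescript
// A minimal sketch of a property-driven vector object.
// Names here are illustrative, not the API of Affinity Photo or any real app.
interface Rect {
  x: number;
  y: number;
  width: number;
  height: number;
  fill: string;
}

// The "sophisticated math" hides behind functions that read the properties.
function area(r: Rect): number {
  return r.width * r.height;
}

function describe(r: Rect): string {
  return `rect ${r.width}x${r.height} at (${r.x},${r.y}) filled ${r.fill}`;
}

// We never place pixels directly; we only change properties,
// and the software re-renders the geometry for us.
const box: Rect = { x: 10, y: 10, width: 200, height: 100, fill: "#ff0000" };
box.width = 300;
console.log(describe(box));
```

Changing `box.width` is all it takes; the rendering functions pick up the new value the next time they run, which is exactly why this model feels more abstract than painting colors directly.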
It’s interesting to note that, on this level, the tool and the conceptual model are already more abstract than on the first level, but for now we don’t have problems adapting because we used this model in other similar apps.
Let’s recap: we can use special tools (e.g. rectangle tool, pen tool, ellipse tool and so on) to create objects and then manipulate their properties to fit our needs.
So the conceptual flow for the second level looks like this:
Now it’s time to leave the familiar world of Affinity Photo and move to Axure RP where we have similar objects like the ones that I’ve previously described, but there are some key differences.
One key difference is related to the terminology.
Another key difference is the way we create these objects and place them on the page. In Axure, we generally create new objects by dragging them from the Library pane onto the page. This is a slightly different conceptual model from the one we've used before: the Library pane holds 'templates' (sometimes referred to as blueprints) that we can use to create new objects.
However, as a quick note, Axure allows us to use the tool conceptual model as well, but the types of objects we can create that way are more limited.
So, why does Axure call these objects widgets? Simply put, they are a little more advanced. As in the second-level example, they have properties that can be adjusted, but they can also respond to events.
This possibility to respond to events is the key ingredient in creating any kind of interaction (interactivity) in Axure and in many other similar tools which are heavily influenced by Axure.
To sum up, the conceptual flow looks like this:
Now that you have been introduced to Axure widgets, let’s explore them in more depth.
Within the app, Axure groups these widgets into categories inside libraries, available in the Library pane. Some widgets are listed multiple times under slightly different names but with different default settings. For instance, the rectangle widget is available as Box 1, Box 2, Box 3 and Primary Button, each with different initial properties such as fill color, border, corners and so on. In essence, they are the same widget, set up for a predefined purpose (again, like a template).
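The library-as-templates idea can be sketched roughly like this. This is a conceptual model only, with hypothetical names, not Axure's internals: the library holds blueprint objects with default properties, and dragging one onto the page effectively clones it into a new, independent instance.

```typescript
// Conceptual sketch: a library of widget "templates" (blueprints).
// Hypothetical names; this is not Axure's actual implementation.
type Widget = { name: string; fill: string; cornerRadius: number };

const library: Record<string, Widget> = {
  "Box 1": { name: "Box 1", fill: "#ffffff", cornerRadius: 0 },
  "Primary Button": { name: "Primary Button", fill: "#0088cc", cornerRadius: 4 },
};

// Dragging from the Library pane is like cloning a template's defaults.
function createFromLibrary(templateName: string): Widget {
  return { ...library[templateName] }; // shallow copy of the blueprint
}

const page: Widget[] = [];
page.push(createFromLibrary("Primary Button"));
page[0].fill = "#ff6600"; // changing the instance leaves the template intact
```

Note that editing the instance on the page does not change the template in the library, which is why Box 1, Box 2 and Primary Button can share the same underlying widget while keeping their own defaults.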
In the Axure documentation, these widgets are grouped by functionality into two categories, basic and advanced, but I would group them into three:
Basic (shape widgets that any UI/UX designer will find familiar, plus some specific ones used only in certain cases)
In this category we have the following widgets:
Semi-advanced (these generally capture additional user input or are part of common UI patterns)
Advanced (these have some advanced functionalities that I’ll touch on in a future article)
Each of these widgets has a set of common properties and events, plus widget-specific ones.
Axure interaction model
The interaction model is inspired by RAD (Rapid Application Development) tools in general, with one key difference: we don't write any code.
In order to make it easier to follow, let me introduce you to some terminology:
If you don't fully understand what they are, don't worry; I'll explain them and how they relate to each other below.
In order to build (design) our prototype, we place multiple widgets on a page, adjust their properties, and respond to different events (like the Click or Tap event) using actions. A prototype can have one or more pages, depending on our needs.
Let’s see how this model works in Axure (see the image below).
So, each widget can listen for events; events are triggered by interactions, and the response to an event can be a single action or a list of actions.
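As a rough sketch of this relationship (again with hypothetical names, not Axure's internals): a widget registers actions against an event name, and an interaction triggers the event, which runs the actions in order.

```typescript
// Conceptual sketch of the widget / event / action relationship.
// Hypothetical names; Axure itself is codeless and does this for you.
type Action = () => void;

class Widget {
  private listeners: Map<string, Action[]> = new Map();

  // The response to an event is an action or a list of actions.
  on(event: string, ...actions: Action[]): void {
    const list = this.listeners.get(event) ?? [];
    list.push(...actions);
    this.listeners.set(event, list);
  }

  // An interaction (e.g. the user clicking) triggers the event.
  trigger(event: string): void {
    for (const action of this.listeners.get(event) ?? []) action();
  }
}

// Example: a button whose Click event runs two actions in order.
const button = new Widget();
const log: string[] = [];
button.on("Click", () => log.push("Open link"), () => log.push("Show widget"));
button.trigger("Click");
```

When you wire up a Click event in Axure's Interactions pane, you are effectively doing what `on` does here: attaching an ordered list of actions to an event, which the user's interaction then triggers.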
Now that you've seen the relationship between these concepts, you should have a solid understanding of the underlying interaction model.
That's it for now. In future articles, we will learn more about these interactions and how to use them, as well as other concepts like states, conditions and expressions.