
A Pentester's Guide to the DREAD Risk Assessment Model

By Luke Turvey · 1 March 2026 · 25 min read

When you’re staring down a long list of vulnerabilities from a penetration test, the big question is always the same: where do you start? The DREAD model is a qualitative risk assessment framework designed to help answer exactly that.

The acronym stands for Damage, Reproducibility, Exploitability, Affected Users, and Discoverability. It’s a straightforward method for prioritising security threats by forcing you to think about their real-world impact.

Unpacking the DREAD Model and Its Purpose

Think of a security professional acting like a triage nurse in a busy A&E. As new vulnerabilities come in, they need to quickly sort the critical injuries from the minor scrapes. DREAD provides the structure for this rapid assessment, helping turn technical jargon into clear, actionable priorities.

Originally developed at Microsoft, the model was prized for its simplicity and speed. Although Microsoft has since adopted different models, DREAD’s practical, no-nonsense approach has given it real staying power in the security community. It's especially useful in fast-paced pentesting engagements where you just need to get things prioritised—and fast.

The goal isn't to calculate a perfect, absolute number. Instead, it’s about creating a consistent and defensible thought process for ranking risks.

Why Pentesters Still Use DREAD

DREAD’s enduring appeal lies in its ability to answer the one question every stakeholder asks: "So what? How bad is this, really?" By breaking risk down into five distinct categories, it pushes testers to look beyond the technical details and consider the actual business impact.

Here’s why it works so well in practice:

  • Clearer Communication: It translates complex technical risks into a simple scoring system that leaders outside of the IT department can actually understand. This makes getting buy-in for fixes so much easier.
  • Faster Triage: When you have dozens of findings, you can use DREAD to score and rank them quickly, ensuring your team’s effort is focused where it matters most.
  • Consistent Prioritisation: It creates a shared language for the whole team. When everyone uses the same framework, you get much more consistent and aligned priorities from one project to the next.

The real value of the DREAD model is the structured logic it brings to the often-chaotic process of vulnerability management. It’s less about the final score and more about the quality of the conversation it provokes.

At its heart, the DREAD risk assessment model is a mental checklist that channels an expert's intuition. It ensures you’re evaluating every threat not just on a technical level, but on what it could genuinely do to the organisation. By using DREAD, pentesters can transform a raw list of vulnerabilities into a strategic roadmap that safeguards a company’s most critical assets.

Breaking Down the Five DREAD Components

To really get to grips with the DREAD model, you need to understand the thinking behind each of its five parts. These aren't just random categories; they're specific lenses for examining a vulnerability and figuring out what it truly means for the business. By scoring each one, you turn an abstract threat into a concrete, prioritised risk.

This process forces a structured conversation about what could go wrong. It shifts the analysis from a purely technical "what is it?" to a business-focused "what does it mean for us?". This is what makes DREAD such a valuable communication tool for any pentester—it frames risk in a way that helps everyone, from developers to the C-suite, understand the urgency.

The infographic below shows how DREAD acts as a central engine for incident response, helping teams triage, prioritise, and communicate security issues effectively.

Infographic showing the DREAD model for incident response, outlining steps for triage, prioritization, and communication.

As you can see, DREAD provides a clear path for making decisions, ensuring the biggest threats get tackled first. Let's explore each component, using a running example of a critical SQL injection vulnerability found in a customer-facing e-commerce application to make the scoring process tangible.

Damage

First up is Damage, which is arguably the most important factor. This metric quantifies the raw impact if an exploit succeeds. It directly answers the question that’s always on a stakeholder's mind: how bad could this really be?

When thinking about Damage, ask yourself: If this vulnerability is exploited, what’s the worst that could happen to our data, systems, reputation, or finances?

To score Damage, you have to think about the worst-case scenario. Are we talking about a minor information leak, like a server version number, or a full-blown database compromise that triggers a massive data breach? The difference is everything.

Example Score (SQL Injection): Our SQL injection vulnerability lets an attacker bypass authentication and dump the entire customer database, complete with personal details and order histories. The potential for a major data breach, steep regulatory fines under GDPR, and catastrophic reputational harm is enormous. For this reason, we’d assign Damage a score of 10.

Reproducibility

Next, we look at Reproducibility. This component measures how reliably an attacker can pull off the exploit. An attack that works flawlessly every single time is far more dangerous than one that only works under a rare, specific set of conditions.

Think of it this way: a highly reproducible vulnerability means that once an attacker figures it out, they can repeat the attack at will, scaling their efforts without any friction. That consistency dramatically increases the overall risk.

Example Score (SQL Injection): The SQL injection is triggered with a simple, well-known payload sent to a specific API endpoint. It works 100% of the time for any attacker who knows the technique. It's a sure thing. Therefore, we confidently assign Reproducibility a score of 10.

Exploitability

Exploitability assesses the skill, effort, and resources an attacker needs to actually carry out the attack. It considers things like whether they need specialist tools, deep technical knowledge, or even physical access to a machine.

For Exploitability, the key question is: How easy is it for an attacker to actually do this?

An exploit that a novice can trigger with a free script is a world away from one that requires a state-sponsored actor with a custom-built toolkit. This metric helps us separate the theoretical risks from the immediate, practical threats we need to worry about right now.

Example Score (SQL Injection): This vulnerability can be exploited using common, free tools like sqlmap. The technique is documented all over the internet, so it requires minimal technical expertise to execute. The barrier to entry is practically non-existent, which justifies an Exploitability score of 9.

Affected Users

The fourth component, Affected Users, measures the scale of the impact. It's all about how many people or systems are in the blast radius. A vulnerability that hits a single administrator has a very different risk profile from one that impacts every single customer.

This metric is often expressed as a percentage or a raw number of users. It gives crucial context to the Damage score; a high-damage attack affecting only one person might be less of a priority than a medium-damage one affecting millions.

Example Score (SQL Injection): The application has a user base of 500,000 registered customers. Since the SQL injection compromises the entire user database, 100% of users are affected. This is a worst-case scenario in terms of scale, warranting a clear Affected Users score of 10.

Discoverability

Finally, we have Discoverability. This component measures how easy it is for an attacker to find the vulnerability in the first place. Is the flaw sitting in plain sight on the login page, or is it buried deep within complex code that rarely gets touched?

A highly discoverable vulnerability is like a ticking time bomb. It's only a matter of time before someone—maliciously or accidentally—stumbles upon it. While obscurity is never a reliable security control, it can influence the immediate risk level.

Example Score (SQL Injection): The vulnerable parameter is in a standard search field on the main product page. Any attacker probing for common web vulnerabilities would almost certainly find it within minutes. It’s low-hanging fruit. This high visibility means we assign Discoverability a score of 9.

DREAD Component Scoring Guide

To help standardise this process, security teams often use a scoring guide. The table below provides a reference for what Low, Medium, and High scores typically mean for each DREAD component on a 1-10 scale.

| Component | Low Score (1-3) | Medium Score (4-7) | High Score (8-10) |
|---|---|---|---|
| Damage | Minor, non-sensitive data exposure; temporary application defacement. | Unauthorised access to internal data; partial service disruption. | Full system compromise; theft of PII/financial data; major regulatory fines. |
| Reproducibility | Difficult to reproduce, even for the original finder; requires very specific timing or conditions. | Exploit works most of the time but has some intermittent failures. | Exploit works reliably every single time. |
| Exploitability | Requires expert skills, custom tools, or physical access. The barrier to entry is very high. | Requires a moderately skilled attacker using common tools and techniques. | Can be exploited by a novice using a web browser or publicly available script. |
| Affected Users | Affects only a small group of non-critical users or a single internal system. | Affects a significant portion of the user base or a critical internal system. | Affects all users, administrators, or the entire infrastructure. |
| Discoverability | Buried deep in source code; requires authenticated access and complex steps to find. | Can be found by a skilled attacker with standard tools after some effort. | Obvious and easy to find on a public-facing page with minimal effort. |

Using a guide like this helps ensure consistency across different assessments and testers, making the final risk scores more reliable and defensible. By methodically evaluating each of these five areas, we build a multi-faceted and truly comprehensive view of the risk.
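The bands in a guide like this are easy to encode so that everyone on the team maps component scores the same way. Here's a minimal sketch; the function name is illustrative, and the boundaries simply mirror the table above:

```python
def component_band(score: int) -> str:
    """Map a single 1-10 DREAD component score to its qualitative band."""
    if not 1 <= score <= 10:
        raise ValueError("DREAD component scores run from 1 to 10")
    if score <= 3:
        return "Low"
    if score <= 7:
        return "Medium"
    return "High"
```

For example, the Damage score of 10 from our SQL injection lands in the "High" band, while a minor banner-disclosure finding scored 2 would come back as "Low".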

How to Calculate and Interpret a DREAD Risk Score


So, you’ve worked through each of the five DREAD components and assigned a score to each one. What now? The next step is to combine them into a single, overall risk score. This is where individual ratings transform into a clear, prioritised action list.

While you could use different methods, the most common and straightforward approach is simply to calculate the average.

The formula is as simple as it looks:

(D + R + E + A + D) / 5 = Final Risk Score

This calculation gives you one number, usually between 1 and 10, that represents the vulnerability's total risk. But remember, the real skill isn’t in the arithmetic. It’s in the experience and consistent logic you apply to justify each of those initial scores.
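The formula is simple enough to capture in a few lines of code. A minimal sketch (the function name is illustrative):

```python
def dread_score(damage, reproducibility, exploitability,
                affected_users, discoverability):
    """Average the five DREAD components into a single 1-10 risk score."""
    components = (damage, reproducibility, exploitability,
                  affected_users, discoverability)
    return round(sum(components) / 5, 1)
```

Plugging in the scores from our running SQL injection example, `dread_score(10, 10, 9, 10, 9)` gives 9.6.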

Let’s walk through two very different scenarios to see the DREAD model in action.

Scenario 1: High-Risk Remote Code Execution

Imagine a penetration tester uncovers a Remote Code Execution (RCE) vulnerability on a core production server—the one hosting the company's main web application. This is the kind of finding that gets everyone's attention, and the DREAD score will absolutely reflect that severity.

Here’s how a seasoned professional might break it down:

  • Damage (10): A full server compromise is catastrophic. An attacker could steal all data, shut down services, and pivot to attack other systems on the network.
  • Reproducibility (10): The exploit works perfectly every single time a specific malicious request is sent. It's completely reliable.
  • Exploitability (8): The exploit script is publicly available, though it needs a little customisation. It's well within the reach of a moderately skilled attacker.
  • Affected Users (10): Since it’s a core application server, the entire user base is affected. Business operations would grind to a halt.
  • Discoverability (7): The vulnerability isn't immediately obvious, but a determined attacker using standard scanning tools will find it without too much trouble.

Now, let's do the maths:

(10 + 10 + 8 + 10 + 7) / 5 = 9.0

A score of 9.0 sends a crystal-clear message: this is a critical threat that demands immediate remediation.

Scenario 2: Low-Risk Missing Security Header

During another test, the pentester notices the web application is missing the X-Content-Type-Options security header. While implementing this header is certainly a best practice, its absence rarely poses a direct, significant risk in most modern browsers.

Let's score this finding:

  • Damage (2): The potential damage is very low. At worst, it could enable niche attacks against users with ancient, non-compliant browsers. There’s no direct path to data compromise.
  • Reproducibility (10): The header is either there or it isn't. Its absence is a constant state, making the condition perfectly reproducible.
  • Exploitability (3): Actually exploiting this requires chaining it with other, more serious vulnerabilities and targeting a user with an outdated browser. It's a high-effort, low-reward attack.
  • Affected Users (5): It technically affects everyone who visits the site, but only a tiny fraction running legacy software are truly vulnerable.
  • Discoverability (10): Spotting a missing header is trivial. It takes seconds with a browser's developer tools or a simple curl command.

The calculation for this much lower-risk issue looks quite different:

(2 + 10 + 3 + 5 + 10) / 5 = 6.0

This score really shows the DREAD model's strength. It correctly balances the high scores for Discoverability and Reproducibility with the very low score for Damage, resulting in a moderate, more realistic risk rating.
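As an aside, the high Discoverability score is easy to justify in practice: checking for a missing security header can be scripted in seconds. A sketch that inspects an already-captured set of response headers, case-insensitively (the function name and default header list are illustrative):

```python
def missing_headers(response_headers,
                    required=("X-Content-Type-Options", "X-Frame-Options")):
    """Return the security headers absent from a response's header set."""
    present = {name.lower() for name in response_headers}
    return [header for header in required if header.lower() not in present]


# Headers as captured from a response, e.g. via curl -sI or a proxy.
headers = {"Content-Type": "text/html", "X-Frame-Options": "DENY"}
print(missing_headers(headers))  # ['X-Content-Type-Options']
```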

From Numbers to Actionable Priorities

A numerical score is a great start, but translating it into a simple, qualitative label is what makes it instantly understandable for everyone, especially non-technical managers and executives. This is why most organisations map scores to a tiered priority system.

The ultimate goal of scoring is to create a clear priority list. A numerical score of 8.6 is less intuitive than a label like 'Critical'. This translation is key for effective reporting.

A common mapping structure looks something like this:

| Score Range | Risk Level | Recommended Action |
|---|---|---|
| 8.0 - 10 | Critical | Fix immediately; may require stopping services or emergency patching. |
| 6.0 - 7.9 | High | Fix within the next patch cycle (e.g., 30 days). |
| 4.0 - 5.9 | Medium | Address within a reasonable timeframe (e.g., 90 days). |
| 1.0 - 3.9 | Low | Fix when time permits or formally accept the risk. |

Using this table, our RCE vulnerability (9.0) is undeniably Critical. The missing header (6.0) lands squarely in the High category, though some teams might argue for Medium. This is where having clear, consistent internal standards becomes vital. It ensures every report your team produces is clear, justifiable, and most importantly, actionable.
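This mapping is straightforward to automate so that every report applies the same thresholds. A minimal sketch using the ranges from the table above (the function name is illustrative):

```python
def risk_level(score: float) -> str:
    """Translate a numerical DREAD score into a tiered priority label."""
    if score >= 8.0:
        return "Critical"
    if score >= 6.0:
        return "High"
    if score >= 4.0:
        return "Medium"
    return "Low"
```

With this in place, `risk_level(9.0)` returns "Critical" for the RCE and `risk_level(6.0)` returns "High" for the missing header, exactly as the table dictates.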

A Balanced Look at the DREAD Model: Pros and Cons

No risk assessment model is a silver bullet, and DREAD is certainly no exception. It was designed for a specific purpose—to be fast and clear—but that very simplicity brings its own set of limitations. Before you bake it into your pentesting workflow, it’s worth taking an honest look at both its strengths and its weaknesses.

The key is understanding this trade-off. You’re not looking for a perfect system, but the right tool for the job at hand.

Where DREAD Shines

DREAD’s greatest strength is its sheer simplicity. In certain scenarios, especially when you’re up against the clock, this straightforward approach is exactly what you need.

Here’s where it really comes into its own:

  • Fast and Intuitive: With just five easy-to-grasp categories, a pentester can score a vulnerability in moments. There are no complicated calculations to wrestle with, making it perfect for rapid-fire triage during an assessment.
  • Cuts Through the Noise: A simple 1-10 score is far easier for a non-technical board member to digest than a complex CVSS vector string. Walking into a meeting and highlighting a high "Damage" score has an immediate, visceral impact.
  • Connects Flaws to Business Risk: The "Damage" and "Affected Users" elements force a conversation about real-world consequences. It reframes a technical bug into a tangible business problem, which is exactly the language leaders understand.

DREAD is fantastic for getting the risk conversation started. Its real power isn’t in its mathematical precision; it’s in its ability to quickly turn a technical finding into a compelling story about business impact—a story that gets people to sit up and take action.

This makes the DREAD model a great choice for initial prioritisation or any situation that demands clear, high-level communication.

The Drawbacks and Common Criticisms

For all its benefits, DREAD has its fair share of critics. In fact, these criticisms were significant enough that its creator, Microsoft, eventually moved away from it in favour of more granular models. It's crucial to understand these drawbacks before you commit to using it.

The most common complaints you'll hear are:

  • It’s Highly Subjective: This is DREAD’s biggest Achilles' heel. Two pentesters, looking at the exact same vulnerability, can easily come up with different scores based on their personal experience and perspective. This can create inconsistencies across reports.
  • The Problem with Averages: Simply averaging the five scores can dangerously obscure the real risk. For instance, a vulnerability with a catastrophic Damage score of 10 could have its overall score watered down by low scores elsewhere, making a critical threat seem less urgent than it is.
  • It's Officially Deprecated: The fact that Microsoft no longer formally endorses its own creation can sometimes weaken its credibility. Some clients may push back, preferring a more contemporary and industry-standard framework like CVSS.
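The averaging problem is easy to see in a few lines. The escalation rule at the end is one common mitigation; it's an add-on some teams adopt, not part of classic DREAD:

```python
def dread_average(scores):
    """Plain DREAD average; a single catastrophic component can be masked."""
    return sum(scores) / len(scores)


# Damage, Reproducibility, Exploitability, Affected Users, Discoverability
catastrophic_but_obscure = [10, 2, 2, 2, 2]

avg = dread_average(catastrophic_but_obscure)  # 3.6, which reads as "Low"

# Mitigation: escalate any finding whose Damage score is high,
# regardless of what the average says.
escalate = catastrophic_but_obscure[0] >= 8
```

Here a vulnerability with worst-case Damage averages out to 3.6, a "Low" under the usual mapping, which is exactly the dilution critics warn about.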

There's also the human element of "dread" itself to consider. Sometimes, a certain type of vulnerability just feels more frightening to a client, regardless of its final score. This psychological weight is something a simple average can’t capture, but it’s a very real factor when you’re trying to secure budget for remediation.

This is especially true in the UK, where the anxiety around cyber threats is palpable. Even when 71% of organisations believe their cybersecurity budgets are sufficient, a staggering 47% of leaders report difficulty getting boardroom support for new security initiatives. This is happening even as attacks impacted 71% of UK firms last year. Platforms like Vulnsy help testers translate this abstract dread into concrete reports, embedding hard-hitting stats like the £2.5 million average cost of incident recovery to close that communication gap. To get a better feel for the current climate, you can read more about the latest UK cybersecurity trends and statistics.

Ultimately, DREAD’s value is all in how you use it. For quick, internal prioritisation and communicating the big picture, it’s still a powerful and relevant tool. But for formal, standardised reporting, you might find it’s better used as a supplement to, or replaced by, other models.

Comparing DREAD to Other Risk Rating Models

No risk assessment model is a silver bullet, and DREAD is certainly no exception. Its greatest strength is its simplicity, but that very quality can also be a weakness in certain situations. To pick the right tool for the job, it’s vital to understand how DREAD stacks up against other popular frameworks.

Let’s put DREAD side-by-side with two of the most widely used alternatives: the Common Vulnerability Scoring System (CVSS) and the OWASP Risk Rating Methodology. The point isn’t to crown a winner but to give you a clear view of their strengths, helping you decide whether you need rapid triage or a detailed, formal report.

DREAD Versus CVSS

The most frequent comparison you'll hear is DREAD versus CVSS. A good way to think about it is that CVSS is the highly structured, almost scientific approach to risk scoring. It’s an open standard built to create objective and repeatable scores that can be compared across different systems and organisations.

CVSS is incredibly thorough, breaking down risk into Base, Temporal, and Environmental metrics. This level of detail is what makes it the gold standard for public vulnerability databases like the NVD. It produces a complex vector string that translates into a precise score out of 10.0, which is great for standardisation.

However, that rigidity can sometimes feel like a straitjacket. When you're in the middle of a custom application pentest, CVSS can feel too inflexible. It doesn’t always capture the unique business context or the specific threat scenarios that are top-of-mind for your client. This is exactly where a more qualitative model like DREAD shines. For a closer look at its metrics, you can learn more about how CVSS works in our detailed guide.

DREAD Versus the OWASP Risk Rating Methodology

The OWASP Risk Rating Methodology offers a different flavour altogether. Like DREAD, it’s more qualitative and context-driven than CVSS, but it goes deeper by focusing on the threat agent themselves and the specific business impact of an attack.

OWASP splits risk into two main factors: Likelihood and Impact. Each of these is then built from several smaller components. For instance, Likelihood considers the threat agent's skill level and motive, while Impact evaluates potential financial damage and reputational harm.

The OWASP model essentially encourages a narrative-driven risk assessment. It pushes you to build a story around a potential attack: Who is the attacker? What do they want? How might they get it? This approach is fantastic for in-depth application security reviews where business context is everything.

This focus on the "who" and "why" behind an attack delivers a rich, contextualised picture of risk. The trade-off? It requires more time and a much deeper understanding of the business than DREAD’s quick-and-dirty five-question method.

DREAD vs CVSS vs OWASP: A Comparative Overview

So, which model should you choose? It all comes down to your objective. Are you trying to quickly sort through a long list of findings? Do you need a universally understood score for compliance? Or are you performing a deep-dive analysis of threats unique to a specific business?

The table below summarises the key differences to help you decide.

| Attribute | DREAD | CVSS | OWASP |
|---|---|---|---|
| Primary Use Case | Rapid triage, internal prioritisation, and clear communication with non-technical stakeholders. | Standardised vulnerability scoring for public databases and cross-organisational comparison. | In-depth, context-aware risk analysis for specific applications and business processes. |
| Complexity | Low. A simple five-question model with an averaged score. | High. A multi-layered system with complex vector calculations. | Medium. Requires evaluating multiple factors for both likelihood and impact. |
| Subjectivity | High. Scores heavily depend on the tester's individual experience and judgement. | Low. Highly standardised to produce objective, repeatable scores. | Medium. Incorporates subjective business context but within a structured framework. |
| Output | A single score from 1-10, often mapped to a qualitative label like "High" or "Medium." | A precise numerical score (e.g., 9.8) and a detailed vector string. | A qualitative rating (e.g., Critical, High, Medium, Low) based on likelihood and impact. |

Ultimately, these models aren't in competition with one another. A seasoned pentester knows they are all just tools in a toolkit. DREAD is invaluable for quick assessments and straightforward client communication, while CVSS and OWASP provide the rigour needed for more formal and detailed analysis. The real art is knowing which tool to pull out and when.

Integrating DREAD into Your Pentesting Workflow

Knowing what the DREAD risk assessment model is and knowing how to use it are two different things. This is where the theory hits the road—by building it directly into your pentesting workflow, you turn an abstract concept into a practical tool that saves you time and brings real clarity to your reports. The goal is to make scoring findings in real-time a repeatable, defensible habit.

This practical application is more important than ever. In the UK, for instance, there's a clear need for better risk quantification. Recent research showed that only 48% of small and medium-sized businesses bothered to conduct a cyber risk assessment in 2025. This is a worrying statistic, especially when you consider the average cost of a major attack is climbing towards £195,000. You can dig deeper into the data and the challenges UK businesses are facing in the latest cybersecurity breaches survey findings.

A Step-by-Step Application Guide

The good news is that weaving DREAD into your process doesn't mean you have to rip up your current workflow. It’s more about adding a structured layer on top of what you already do when documenting and reporting findings.

Here’s a simple way to approach it:

  1. Document as You Go: The moment you identify a vulnerability, score it. Don't put it off until the reporting stage. The context is fresh in your mind, and your scoring will be far more accurate.
  2. Use a Template: Create a standard DREAD scoring section in your report template or finding library. This ensures every vulnerability gets evaluated against the same five criteria, which is a huge step towards consistency, especially across a team.
  3. Calculate and Prioritise: With the scores in place, run the numbers to get the final risk rating. You can then use this score to sort your findings from most to least critical, giving the client a clear, prioritised roadmap for remediation.
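Step three is easy to automate. A minimal sketch that scores a handful of findings and sorts them most-critical-first (the finding titles and data structure are illustrative):

```python
findings = [
    {"title": "SQL Injection in Login Form", "dread": [10, 10, 9, 10, 9]},
    {"title": "Missing X-Content-Type-Options Header", "dread": [2, 10, 3, 5, 10]},
    {"title": "Verbose Server Banner", "dread": [1, 10, 2, 3, 8]},
]

# Average each finding's five component scores into its overall rating.
for finding in findings:
    finding["score"] = round(sum(finding["dread"]) / 5, 1)

# Most critical first: this ordering becomes the client's remediation roadmap.
prioritised = sorted(findings, key=lambda f: f["score"], reverse=True)
```

Running this puts the SQL injection (9.6) at the top and the server banner (4.8) at the bottom, which is exactly the prioritised roadmap the client needs.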

Embedding this simple process standardises how your team assesses risk—a hallmark of any professional security engagement. For a better sense of where this fits into the big picture, have a look at our guide on the phases of penetration testing.

Automating DREAD with Modern Reporting Tools

The real magic happens when you pair DREAD’s simplicity with a modern reporting platform. Manually typing out a DREAD breakdown for every single finding is tedious and a poor use of your time. Using a tool with a reusable finding library, however, completely changes the game.

Imagine you've created a finding for "Cross-Site Scripting (Reflected)" in a platform like Vulnsy. You can build the DREAD score directly into that reusable template. The next time you discover that same vulnerability on an engagement, you just import the entire finding—description, remediation steps, and the DREAD breakdown—with a single click.

This approach eliminates countless hours of repetitive copy-pasting and formatting. The image below shows just how cleanly a DREAD score breakdown can be integrated into a finding using a modern pentesting workspace.

A close-up view of a modern workspace with a laptop, a document, and a pen on a wooden desk.

As you can see, the scores sit right there alongside the vulnerability details, making the risk justification instantly clear and easy for anyone to understand.

Mapping DREAD to Report Severity

The final piece of the puzzle is translating the numerical DREAD score into a qualitative severity rating that your clients will understand at a glance—like Critical, High, Medium, or Low. This is crucial for clear communication.

Here’s an example of how you can present this in a finding:

Finding: SQL Injection in Login Form

  • Severity: Critical
  • DREAD Score: 9.6

DREAD Breakdown

  • Damage (10/10): A successful exploit leads to a full database compromise, exposing all user data.
  • Reproducibility (10/10): The exploit works reliably, every single time.
  • Exploitability (9/10): The attack requires only publicly available, common tools.
  • Affected Users (10/10): All application users, including administrators, are impacted.
  • Discoverability (9/10): The vulnerability is on a primary, unauthenticated endpoint.

This clear, structured breakdown immediately backs up the "Critical" rating. It helps clients quickly grasp not just what the problem is, but why it needs their immediate attention. By integrating the DREAD risk assessment model into an automated workflow, you can deliver higher-quality reports, faster and more consistently.

Your Questions About the DREAD Model Answered

Even with a solid grasp of a framework, you're bound to have questions when you start applying it in the real world. Let's tackle some of the most common queries that pop up when pentesters first get their hands on the DREAD model.

Is the DREAD Model Still Relevant Today?

Absolutely. While it’s true that its creator, Microsoft, no longer officially uses it, DREAD’s simplicity and speed keep it very much in the game. It excels at rapid risk triage, making it a fantastic tool for quickly sorting through findings during an engagement or for internal prioritisation meetings. It's especially useful for explaining high-level risks to non-technical stakeholders who don't need the granular detail of a framework like CVSS.

What Is the Biggest Weakness of the DREAD Model?

Its greatest weakness is definitely its subjectivity. Ask two different testers to score the same vulnerability, and you'll likely get two different ratings based on their personal experience and perspective.

Another significant criticism centres on the maths. The averaging formula can accidentally hide a major problem. For example, a catastrophic Damage potential could be watered down by low scores in other areas, making a critical business risk seem far less urgent than it actually is.

When Should I Use DREAD Instead of CVSS?

Think of DREAD as your go-to for quick, on-the-fly risk assessments. It's perfect for initial prioritisation talks within your team or for explaining the potential business impact to leadership in simple terms.

You should switch to CVSS when you need a standardised, objective, and technically detailed score. CVSS is the industry standard for a reason; its scores are recognised everywhere and can be compared across different tools and platforms, which is essential for things like formal compliance reporting.

How Can I Make My DREAD Scoring More Consistent?

Consistency is key, and the best way to achieve it is by creating a dedicated internal scoring guide.

Define what a "Low," "Medium," and "High" score looks like for each of the five components. Use specific, concrete examples that are relevant to your organisation or the clients you typically work with. Holding regular calibration sessions where the team scores vulnerabilities together is also a great way to get everyone aligned.


Accelerate your reporting and deliver consistently professional results on every engagement. Vulnsy replaces hours of manual formatting with automated, brandable templates and a reusable finding library. Start your free trial at Vulnsy.com.

Tags: dread risk assessment model, risk assessment, pentesting, vulnerability management, cybersecurity reporting

Written by

Luke Turvey

Security professional at Vulnsy, focused on helping penetration testers deliver better reports with less effort.
