What Is Penetration Testing With Example?

You've probably seen this pattern already. A client says they “run scans regularly”, the firewall is in place, the cloud estate is “locked down”, and everyone assumes the basics are covered. Then a real test starts, and within hours you find an exposed admin panel, weak access control in a customer portal, or a forgotten service that gives you a path deeper into the environment.
That's why people search for what is penetration testing with example. They don't want a textbook definition. They want to know what happens during a test, why it matters to a business, and how the technical work turns into something a manager can act on.
A good penetration test isn't just about landing an exploit. It's a controlled exercise that answers practical questions. Can an attacker get in? How far could they get? What's the actual business impact? And just as important, can the team explain the findings clearly enough that someone fixes them?
What Is Penetration Testing and Why Does It Matter
Penetration testing is an authorised, simulated cyberattack against a system, application, or network. The point isn't to create chaos. The point is to identify weaknesses that a real attacker could exploit, then prove the risk in a controlled way before somebody malicious finds the same path.
A simple way to explain it to non-technical stakeholders is this. You hire a specialist to try to break into your own property, with rules, permission, and boundaries, so you can fix the weak doors before a burglar tries them for real.
It's more than a scan
A vulnerability scanner tells you what might be wrong. A penetration test tells you whether those weaknesses can be used, chained together, and turned into access, data exposure, or control over a system.
That distinction matters. Many environments look acceptable on the surface because automated tools return a list of findings and someone closes the ticket. A pentester goes further. They test assumptions, look for attack paths, and validate impact.
Practical rule: If your security activity stops at “the scanner found nothing severe”, you still don't know how your environment behaves under attack.
The urgency isn't theoretical. In the UK, the National Cyber Security Centre reported that cyber incidents affecting essential services rose by 17% in 2024, and 68% of breached organisations said they had not conducted a penetration test in the prior year, according to this UK-focused penetration testing statistics summary.
Why businesses use it
Most buyers don't commission a pen test because they want a hacker's report for its own sake. They want answers to business questions such as:
- Can we trust this internet-facing application? Especially before launch, after a major release, or before onboarding large customers.
- Are our controls functioning? MFA, segmentation, access control, WAF rules, and monitoring often look stronger on paper than they are in practice.
- What should we fix first? A good test helps teams prioritise real exploit paths over generic noise.
- Will the report stand up to scrutiny? Security managers often need evidence for internal governance, insurers, customers, or compliance reviews.
If you want a broader business view, why penetration testing is important is usually best understood as risk reduction through evidence, not as a box-ticking exercise.
A useful mindset
The best engagements treat penetration testing as decision support. The exploit matters, but the outcome matters more. If a test shows that a low-privilege user can reach sensitive data, or that a public web app can be used as a stepping stone into internal systems, its core value is the clarity that follows. You now know where the weakness is, how it was reached, and what needs to change.
Goals and Types of Penetration Tests
Some clients ask for “a pentest” as if there's only one kind. There isn't. The right engagement depends on what you need to learn, how realistic the simulation should be, and how much context you're willing to give the tester.

What a test is trying to achieve
A mature buyer usually has one or more of these goals in mind:
- Validate exposure: Can someone on the internet reach something they shouldn't?
- Assess likely attacker paths: If one control fails, what's next?
- Support compliance work: Some organisations need evidence that testing has been carried out and findings have been documented properly.
- Check remediation quality: After fixes, teams want proof that the original weakness is gone and no easy variant remains.
- Measure risk in business terms: A finding matters more when it's tied to records exposed, privileges gained, or systems reached.
Different test types support these goals in different ways.
Penetration Testing Approaches Compared
| Approach | Tester's Knowledge | Best Simulates | Pros | Cons |
|---|---|---|---|---|
| Black box | Little to no prior knowledge | An external attacker starting cold | Most realistic for internet-facing exposure, shows what can be found from the outside | Slower discovery, less depth in a fixed time window |
| White box | Full or extensive knowledge, such as architecture or code access | A highly informed adversary or deep assurance review | High coverage, efficient use of time, strong for finding hidden logic flaws | Less realistic as a pure outsider scenario |
| Grey box | Partial knowledge, often a user account or limited design context | An attacker with some foothold or leaked knowledge | Balanced realism and efficiency, useful for role-based access testing | Can miss issues outside the provided vantage point |
How to choose the right one
Black box is useful when the main concern is external exposure. If a client wants to know what a real outsider can discover from public information and internet-facing assets, this is often the right fit. The trade-off is time. Testers spend more of the engagement learning the environment before they can attack it properly.
White box works well when depth matters more than surprise. If a team needs a thorough review of a critical application, source code access, design documents, and privileged insight make the test more efficient. You lose some realism, but you gain coverage.
Grey box is the practical middle ground for many commercial engagements. Give the tester a standard user account, limited documentation, or a network segment, and you can answer useful questions quickly. This is often the best option when you want to test privilege escalation, horizontal access issues, or what happens after a minor foothold.
A common mistake is choosing black box because it sounds more “real”, then expecting white box depth from the same budget and timeframe.
Match the test to the risk
A customer portal with role-based access issues often benefits from grey box testing. A crown-jewel application before a major release may justify white box depth. An organisation worried about perimeter exposure usually starts with black box.
What doesn't work is treating every environment the same. A test only delivers value when its approach matches the question you're trying to answer.
The Standard Penetration Testing Methodology
A penetration test follows a structure. Good testers may adapt their tactics as they learn more, but the engagement still moves through recognisable phases. That structure matters because each stage produces evidence, reduces guesswork, and keeps the exercise safe and useful.

Planning and reconnaissance
What separates disciplined teams from reckless ones happens before any testing starts: scope, rules of engagement, test windows, sensitive systems, and points of contact are all agreed in advance.
Then comes reconnaissance. The tester collects information about the target, such as exposed services, technologies in use, application behaviour, user flows, and publicly available clues that may help later.
Why it matters is simple. Bad scoping causes disruption. Weak reconnaissance causes shallow results.
Scanning and gaining access
Once the tester understands the surface, they begin probing it. That may involve port and service enumeration, web application mapping, authentication testing, parameter analysis, or checking how the environment handles malformed input and edge cases.
From there, the tester attempts exploitation. This is the phase most people picture when they think of pentesting, but it should never be random. The strongest testers don't just fire tools at a target. They build and test hypotheses.
A login form may be vulnerable to SQL injection. An API may allow object references it shouldn't. A forgotten service may expose a path for remote code execution. The exploit itself is only useful if it demonstrates genuine risk.
Post-exploitation and analysis
Getting access is not the finish line. The next question is what that access means.
Can the tester escalate privileges? Reach sensitive data? Pivot to another system? Abuse trust relationships? Demonstrate impact without harming the environment?
Businesses often get their clearest answer from this process. A small technical flaw can turn out to be inconsequential, or it can become the first step in a much larger compromise path.
CREST's survey of 300 UK pentesters found that 29% of target environments had critical vulnerabilities and 44% had important ones, with an average of 0.7 critical findings per engagement, as summarised in these penetration testing statistics. That's a useful reminder that structured methodology regularly uncovers issues that matter.
Reporting is part of the methodology
The last phase isn't admin. It's where the engagement becomes useful to everyone else.
A report needs to explain:
- What was tested
- What was found
- How it was validated
- What the business impact is
- What should be fixed first
If that handoff is weak, the technical quality of the test gets wasted. Teams need evidence, remediation guidance, and prioritisation they can act on. For a more detailed breakdown of how engagements flow, the phases of penetration testing are best understood as a chain where each step supports the next.
The best pentest reports don't just prove you got in. They explain why that path existed and how to close it without ambiguity.
Penetration Testing Examples in Action
Definitions help, but examples make the work real. Two common scenarios show what penetration testing looks like in practice. One is application-focused. The other is infrastructure-focused. In both cases, the tester isn't chasing tricks for their own sake. They're trying to prove a credible path from weakness to impact.

Example one: web application SQL injection
A client exposes a customer-facing web application with a login form, account pages, and an admin area that shouldn't be reachable without administrative rights. On first look, nothing appears unusual. The form returns clean error messages. The application uses HTTPS. Basic scanning doesn't immediately show severe issues.
A tester starts manually interacting with the login request. They review parameters, response differences, session handling, and how the application behaves when input is malformed. One parameter starts responding oddly when special characters are introduced.
That leads to controlled testing for SQL injection. A classic payload such as ' OR 1=1-- is the kind of example often used to illustrate how unsafe query handling can alter application logic if input isn't properly sanitised. The point isn't the string itself. The point is what the application does with it.
If authentication logic breaks and the tester gains unauthorised access, the next step is to validate impact carefully. Can they access another user's data? Can they enumerate records? Can they reach administrative functions?
A solid proof of concept usually includes:
- The vulnerable request: Enough detail for the developer to reproduce the flaw
- The observed behaviour: How the application responded differently
- Evidence of impact: Screenshots or captured output showing unauthorised access
- A remediation direction: Parameterised queries, input handling review, and access control checks
A proof of concept should be reproducible by the client's technical team. If they can't follow what you did, they'll struggle to fix it properly.
This kind of workflow is common in application penetration tests, especially where a scanner hints at an issue but manual verification is needed to prove whether it's real, exploitable, and dangerous.
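To make the remediation direction concrete, here's a minimal sketch of the vulnerable pattern and the parameterised fix, using an in-memory SQLite database. The table, columns, and login functions are illustrative, not taken from any real engagement; the point is how differently the two query styles treat the same payload.

```python
import sqlite3

# Illustrative schema: a single user so the bypass is easy to see.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

payload = "' OR 1=1--"

def login_unsafe(username, password):
    # Vulnerable: attacker input is concatenated into the SQL string, so the
    # payload rewrites the WHERE clause and comments out the password check.
    query = ("SELECT username FROM users WHERE username = '%s' "
             "AND password = '%s'" % (username, password))
    return conn.execute(query).fetchone()

def login_safe(username, password):
    # Remediated: parameterised placeholders keep the payload as plain data.
    query = "SELECT username FROM users WHERE username = ? AND password = ?"
    return conn.execute(query, (username, password)).fetchone()

print(login_unsafe(payload, "anything"))  # returns a row: authentication bypassed
print(login_safe(payload, "anything"))    # returns None: payload treated as data
```

The fix is not string filtering. Parameterisation changes how the database interprets the input, which is why it appears in the remediation direction above.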
Example two: broken access control through IDOR
Not every serious web flaw is flashy. Some of the most valuable findings are simple authorisation failures.
Consider a portal where authenticated users can view profile details at a path such as /user/123. During testing, the pentester changes that identifier to another value and discovers the application returns a different user's records instead of enforcing access control. That's a straightforward example of Insecure Direct Object Reference, often shortened to IDOR.
The interesting part isn't the URL change. It's the business impact. If the records include personal details, invoices, support history, or account management options, a basic user may now be able to access data that belongs to someone else. In some portals, that same flaw appears in admin functions, export features, or billing workflows.
A good report won't stop at “IDOR found”. It will document which roles were tested, what objects were accessible, whether write actions were possible, and how broad the exposure appears to be.
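The underlying flaw and its fix can be sketched in a few lines. The record store, session user, and exception type below are all hypothetical; the pattern to note is that authorisation must be enforced server-side against the authenticated user, never inferred from the identifier the client sends.

```python
# Illustrative record store keyed by the identifier that appears in the URL.
RECORDS = {
    123: {"owner": "alice", "invoice": "INV-001"},
    456: {"owner": "bob", "invoice": "INV-002"},
}

class Forbidden(Exception):
    """Raised when a user requests a record they don't own."""

def get_profile_vulnerable(session_user, record_id):
    # Vulnerable: trusts the client-supplied identifier and never checks
    # whether the record belongs to the authenticated user.
    return RECORDS[record_id]

def get_profile_fixed(session_user, record_id):
    # Remediated: ownership is checked against the session, not the URL.
    record = RECORDS[record_id]
    if record["owner"] != session_user:
        raise Forbidden("record %s does not belong to %s" % (record_id, session_user))
    return record

# alice changes /user/123 to /user/456 and receives bob's data:
print(get_profile_vulnerable("alice", 456)["invoice"])  # prints INV-002
```

In a real application the check usually lives in a shared authorisation layer rather than each handler, which is also why the same flaw tends to recur across export, billing, and admin endpoints when that layer is missing.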
Example three: network access from an exposed service
A network engagement usually starts less overtly. The tester enumerates exposed hosts and services, reviews versions and banners where available, and checks for weak segmentation, default access paths, or known misconfigurations.
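One small piece of that enumeration can be shown as a banner grab. This is a simplified stand-in for purpose-built tools such as Nmap, and the hostname in the usage comment is a placeholder; only point it at systems you are authorised to test.

```python
import socket

def grab_banner(host, port, timeout=3.0):
    """Connect to a TCP service and return the first data it sends, if any.

    Services like SSH, SMTP, and FTP announce a version banner on connect;
    protocols such as HTTP need a request sent first, which this sketch
    deliberately does not attempt.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            data = sock.recv(1024)
            return data.decode(errors="replace").strip() or None
    except OSError:
        # Closed port, refused connection, or timeout: nothing to report.
        return None

# Usage, against a host you are authorised to test (placeholder hostname):
# print(grab_banner("target.example", 22))
```

A version string recovered this way is only a lead, not a finding. The tester still has to confirm whether the exposed service is actually exploitable before claiming anything in the report.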
Suppose one server is externally reachable and hasn't been hardened properly. The tester validates whether the exposed service can be used to obtain an initial foothold. If access is gained, the important work starts after that.
They inspect what the compromised host can reach internally. Maybe it can talk to a file share, an application server, or a management interface that shouldn't be reachable from that point. Even read-only access to sensitive files can change the severity of the issue significantly.
A careful network pentest will document:
- Initial access path
- Privilege level obtained
- Internal systems reachable from that foothold
- Sensitive data or business processes exposed
- Containment and remediation recommendations
What works in these tests is disciplined validation. What doesn't work is over-claiming. If a tester can only prove limited access, the report should say exactly that. Credibility matters more than drama.
From Findings to Actionable Reports
Many outside the trade think the hard part of penetration testing is the exploit. Often it isn't. The hard part is delivering a report that is technically accurate, clear to different audiences, and fast enough to keep the engagement commercially healthy.

The reporting tax is real
A lot of pentesters still finish an engagement, then disappear into screenshots, Word templates, formatting issues, repeated finding writeups, and manual evidence handling. That work is necessary, but too much of it is waste.
Industry analysis cited in this discussion of penetration testing examples describes a significant reporting tax, with UK cybersecurity professionals often spending 30% to 40% of engagement time on manual documentation and report formatting rather than on billable testing work.
That has practical consequences:
- Testers lose momentum: Context fades while reports are being assembled manually.
- Consultancies reduce margin: Skilled security time gets consumed by formatting.
- Clients wait longer for answers: Remediation gets delayed because the report is late.
- Quality drifts between engagements: Repeated copy-paste workflows create inconsistency.
What a useful report actually contains
A strong pentest report has two jobs. It has to tell leadership what matters, and it has to tell engineers what to fix.
Executive content for decision-makers
Leaders need a concise view of the engagement. They care about attack paths, business impact, systemic weaknesses, and remediation priority. They do not need ten pages of request-and-response detail up front.
Useful executive reporting usually includes:
- Scope summary: What was in and out
- Risk overview: Which findings matter most
- Business impact statement: What an attacker could achieve
- Priority guidance: What should be addressed first
Technical detail for the people fixing it
The engineering team needs enough evidence to reproduce the issue and enough context to remediate it properly. If that detail is vague, findings bounce back, fixes get delayed, and trust in the report drops.
The technical section should cover:
- Affected asset or endpoint
- Clear reproduction steps
- Evidence such as screenshots or request details
- Impact explanation grounded in the tested environment
- Remediation guidance that is specific, not generic
Reporting quality affects remediation quality. If the writeup is weak, even a valid finding can stall in backlog for weeks.
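One practical way to enforce that checklist is to capture each finding in a structured record as testing proceeds, rather than reconstructing it from screenshots afterwards. The sketch below uses illustrative field names, not a Vulnsy schema or any formal standard.

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    """Minimal structured finding record; field names are illustrative."""
    title: str
    severity: str                # e.g. "critical", "high", "medium", "low"
    affected_asset: str
    reproduction_steps: list = field(default_factory=list)
    evidence: list = field(default_factory=list)  # screenshots, raw requests
    impact: str = ""
    remediation: str = ""

    def is_actionable(self):
        # An engineer can act on a finding only if it names a target,
        # explains how to reproduce it, and gives a specific fix direction.
        return bool(self.affected_asset
                    and self.reproduction_steps
                    and self.remediation)

finding = Finding(
    title="SQL injection in login form",
    severity="critical",
    affected_asset="https://portal.example/login",
    reproduction_steps=["Submit ' OR 1=1-- in the username field",
                        "Observe authentication bypass"],
    impact="Unauthenticated access to customer accounts",
    remediation="Use parameterised queries and review input handling",
)
print(finding.is_actionable())  # prints True
```

Capturing this structure during testing, while context is fresh, is what makes the later split into executive and technical sections mostly mechanical.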
Modern reporting is part of testing quality
The old model treats reporting like paperwork after the actual work. In practice, reporting is part of the actual work.
Teams that standardise findings, capture evidence as they go, and use structured reporting workflows usually deliver better outcomes. They spend less time fighting document formatting and more time validating edge cases, improving writeups, and reviewing impact properly.
That's the shift more teams need to make. The penetration test is not complete when the exploit lands. It's complete when the client receives a report they can use immediately.
Actionable Takeaways for Testers and Managers
For testers, the big lesson is this. Don't define good work only by technical cleverness. A clean exploit with weak evidence and vague remediation creates less value than a modest finding documented well, prioritised correctly, and tied to business impact.
A few habits make a real difference:
- Test with intent: Know what question you're answering before you start pushing at a target.
- Validate impact carefully: Prove enough to establish risk, but don't overstate what you haven't demonstrated.
- Write while context is fresh: Delayed documentation usually means weaker evidence and more rework.
- Improve communication: Clear writing is part of the craft, not an extra.
For managers, the key point is that penetration testing should produce decisions, not just a PDF. If the report doesn't tell your team what to fix first, how the issue was reached, and why it matters to operations or data exposure, you haven't extracted full value from the engagement.
Use these checks when reviewing a pentest deliverable:
- Is the scope clear? You need to know what conclusions you can safely draw.
- Are findings actionable? Your engineers should be able to reproduce and fix them.
- Is business impact explained? Severity without context isn't enough.
- Was reporting timely? Slow delivery weakens the value of time-sensitive findings.
The practical answer to what is penetration testing with example is this. It's a controlled attempt to think and act like an attacker, then turn what you learn into evidence the organisation can use. The exploit gets attention. The clarity that follows is what improves security.
If your team is spending too much time turning solid test work into messy Word documents, Vulnsy is built to remove that friction. It helps pentesters and security teams produce consistent, professional, brandable reports faster, with reusable findings, evidence handling, collaboration features, and clean DOCX exports that let you spend more time testing and less time formatting.
Written by
Luke Turvey
Security professional at Vulnsy, focused on helping penetration testers deliver better reports with less effort.

