How Security Professionals Can Actually Use LLMs Effectively


Situation

Large language models (LLMs) such as ChatGPT and Claude, along with LLM-powered development tools such as Cursor, have quickly become part of many engineering and security workflows. Yet many security professionals still struggle to get meaningful results from them. The problem is rarely the tool itself. It is usually how the tool is being used.

Many people interact with LLMs the same way they use search engines: short queries, minimal context, and the expectation of a quick answer. That approach works poorly for complex security work. Security analysis depends on context, constraints, systems thinking, and domain knowledge. Without those elements, the model will default to generic answers.

Used properly, however, LLMs can accelerate research, system design, documentation, threat analysis, and engineering tasks. The following techniques outline practical ways security analysts, threat hunters, SOC analysts, and security engineers can structure their prompts to get much higher-quality output.

Solution

1. Start With Role-Stacking

One of the most effective ways to improve responses from an LLM is to define the perspectives it should reason from. This technique can be described as role-stacking.

Instead of asking a simple question, define the roles the model should simulate while answering. This helps the system combine multiple disciplines instead of staying confined to one domain.

For example:

As a security analyst specializing in phishing detection.
As a software engineer experienced with Python, Flask, and Docker.

Create a simple Flask application that collects DNS, WHOIS/RDAP,
HTML, and other open-source threat intelligence for a given URL.

This prompt instructs the model to think like both a security practitioner and a developer. The result is typically far more useful than simply asking for a “Flask application.”

The same approach can be used in operational security environments.

Example for SOC analysts:

As a SOC analyst experienced in alert triage.
As a threat hunter familiar with MITRE ATT&CK.
As someone who has dealt with alert fatigue in production environments.

Analyze this detection rule and identify potential blind spots
or sources of false positives.

Stacking roles encourages the model to evaluate the problem from several angles simultaneously. This often surfaces design issues, detection gaps, or operational challenges that might otherwise be missed.
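When the same roles are reused across many queries, the pattern can be assembled programmatically before being sent to whichever model API is in use. The helper below is an illustrative sketch; the function name and prompt layout are assumptions, not a standard API:

```python
def role_stacked_prompt(roles, task):
    """Compose a role-stacking prompt: one 'As ...' line per role,
    followed by a blank line and the actual task."""
    role_lines = "\n".join(f"As {role}." for role in roles)
    return f"{role_lines}\n\n{task}"

prompt = role_stacked_prompt(
    roles=[
        "a SOC analyst experienced in alert triage",
        "a threat hunter familiar with MITRE ATT&CK",
    ],
    task="Analyze this detection rule and identify potential blind spots.",
)
```

Keeping the role list as data makes it easy to maintain a small catalog of role stacks (triage, detection engineering, architecture review) and reuse them consistently across a team.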

2. Be Explicit About Your Technology Stack

LLMs perform significantly better when they understand the environment they are working within. Security infrastructure varies widely across organizations, and vague prompts lead to vague recommendations.

Providing explicit context about available technologies allows the model to reason within realistic constraints.

Example:

As a senior security engineer and software engineer experienced with Docker, Kubernetes, caching systems, data streaming, gRPC, protobuf, JSONSchema, and common security data models.

Also familiar with Puppeteer, Playwright, Chromium automation, Selenium, network captures, monitoring pipelines, anti-bot detection, cookie injection, and browser fingerprinting.

Using these technologies, design an application that analyzes phishing URLs and collects supporting threat intelligence.

This prompt defines the available tools and capabilities. The model now understands what technologies are acceptable for the solution. Instead of proposing hypothetical tools that may not exist in the environment, it is more likely to design around the technologies that are actually available.

Threat hunters can apply the same approach when working with detection infrastructure.

Example:

As a threat hunter with access to Splunk, CrowdStrike EDR,
network flow data, and Windows telemetry.

Operating in a hybrid cloud environment with AWS and on-prem
Windows infrastructure. Familiar with Sigma rules and MITRE ATT&CK.

Help develop a hunting hypothesis for detecting lateral movement
through compromised service accounts.

The more accurately the environment is described, the more realistic and actionable the response becomes.
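Environment context can be templated the same way, so that every prompt a team sends carries the same accurate description of its stack. This is a minimal sketch under assumed field names; adapt the wording to your own environment:

```python
def environment_prompt(roles, tools, environment, task):
    """Prefix a task with stacked roles plus an explicit technology context,
    so the model reasons within the tools that actually exist."""
    return "\n\n".join([
        "\n".join(f"As {role}." for role in roles),
        "Available tooling: " + ", ".join(tools) + ".",
        "Environment: " + environment + ".",
        task,
    ])

prompt = environment_prompt(
    roles=["a threat hunter"],
    tools=["Splunk", "CrowdStrike EDR", "network flow data", "Windows telemetry"],
    environment="hybrid cloud with AWS and on-prem Windows infrastructure",
    task="Help develop a hunting hypothesis for detecting lateral movement "
         "through compromised service accounts.",
)
```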

3. Request Depth Explicitly

By default, LLMs often produce short answers designed to satisfy a basic query. For security work, those answers are often too shallow.

Explicitly requesting deeper reasoning significantly improves output quality.

Common instructions that help include:

  • Take time to reason through the problem carefully

  • Use critical systems thinking

  • Consider edge cases and operational constraints

  • Validate assumptions before making recommendations

Example prompt:

Using your knowledge of phishing defense and threat intelligence,
we are evaluating several intelligence providers to integrate
into a detection pipeline.

Products under evaluation:
– VirusTotal GTI
– Team Cymru
– Feedly
– any.run

Create an evaluation framework for comparing these solutions.

Take your time. Think through carefully. Validate assumptions
and avoid hallucinating unsupported claims.

Explicit instructions like these often encourage the model to structure its reasoning and provide more detailed analysis rather than a quick summary.

This approach works particularly well when evaluating tools, designing detection strategies, or planning system architecture.
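The depth instructions listed above can be kept as a reusable suffix and appended to any prompt. A small sketch, assuming you want the same directives every time:

```python
# Reusable depth directives, taken from the list above.
DEPTH_DIRECTIVES = [
    "Take time to reason through the problem carefully.",
    "Use critical systems thinking.",
    "Consider edge cases and operational constraints.",
    "Validate assumptions before making recommendations.",
]

def with_depth(prompt):
    """Append explicit depth instructions to an existing prompt."""
    return prompt + "\n\n" + "\n".join(DEPTH_DIRECTIVES)

deep_prompt = with_depth(
    "Create an evaluation framework for comparing these "
    "threat intelligence providers."
)
```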

4. Encourage Current Context

Security tooling, threat actor techniques, and defensive strategies evolve quickly. When relevant, prompts should explicitly request recent information.

Example:

Search for the latest developments in phishing sandbox analysis tools.

Evaluate which platforms are currently most effective for analyzing
phishing links and malicious attachments from email campaigns.

Even when the model does not have perfect real-time data, prompting it to consider recency encourages more careful reasoning about industry trends and current tooling.

This becomes even more powerful when the LLM is paired with systems that include web search or external knowledge retrieval.

5. Frame Problems as Systems

Security problems are rarely isolated tasks. They are almost always systems problems involving data pipelines, infrastructure limitations, scale, and operational tradeoffs.

When prompts describe systems rather than individual tasks, the resulting analysis tends to be much more useful.

Example:

Imagine a detection platform that can hunt backward through
30 days of telemetry using indicators.

The system receives approximately 2 million new indicators per day
across domains, IP addresses, URLs, and file hashes.

As time passes, the dataset grows into the billions of records.

Provide five architectural approaches for storing and processing
this data for several years while maintaining reasonable query speed.
Use critical systems thinking.

This type of prompt forces the model to reason about:

  • data growth

  • indexing strategies

  • storage systems

  • distributed processing

  • cost and retention tradeoffs

The output typically resembles an architecture review rather than a simple feature suggestion.
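A quick back-of-envelope calculation shows why the prompt above is a genuine systems problem rather than a feature request. The 500-byte record size is an assumption for illustration; real enriched indicator records vary widely:

```python
# Rough sizing for the hypothetical pipeline described above.
INDICATORS_PER_DAY = 2_000_000
BYTES_PER_RECORD = 500   # assumption: one enriched indicator record
YEARS = 3

records = INDICATORS_PER_DAY * 365 * YEARS        # total records retained
raw_bytes = records * BYTES_PER_RECORD            # raw size before indexes/replication

print(f"{records:,} records, ~{raw_bytes / 1e12:.2f} TB raw")
```

At a constant ingest rate the growth is linear, but even so the raw data crosses the terabyte mark within a few years, before any indexes, replicas, or derived datasets are counted. That is the scale the five architectural approaches have to survive.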

SOC environments can benefit from this style as well.

Example:

Our SIEM generates approximately 50,000 alerts per day
across 200 detection rules.

Five rules generate 60% of the alert volume and consume
approximately 15% of analyst time.

How should we approach optimizing this situation?

Consider both short-term tactical improvements and
longer-term detection engineering strategies.

The model is now solving an operational systems problem instead of merely explaining alert fatigue.
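The numbers in that prompt already point at the answer, which is part of why framing them explicitly helps. A short worked calculation, using only the figures given above:

```python
ALERTS_PER_DAY = 50_000
TOTAL_RULES = 200
NOISY_RULES = 5
NOISY_SHARE = 0.60   # five rules produce 60% of all alerts

noisy_per_rule = ALERTS_PER_DAY * NOISY_SHARE / NOISY_RULES
quiet_per_rule = ALERTS_PER_DAY * (1 - NOISY_SHARE) / (TOTAL_RULES - NOISY_RULES)
ratio = noisy_per_rule / quiet_per_rule

print(f"{noisy_per_rule:.0f} vs {quiet_per_rule:.0f} alerts/rule/day "
      f"({ratio:.1f}x difference)")
```

Each of the five noisy rules fires roughly 58 times more often than an average rule, so tuning or rearchitecting just those five is the obvious short-term lever before any broader detection engineering effort.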

6. Generate Reusable Project Definitions

LLMs can also be useful for documenting and structuring future work. After completing a project, it can be valuable to generate a structured project definition that can be reused later.

For example:

Assess this project as a whole and generate a markdown definition
that can be used as a template for similar projects in the future.

Consider both current LLM interaction patterns and how workflows
may evolve as AI tooling becomes more integrated into engineering
environments.

Produce the document in a format that is clear, structured,
and optimized for future reuse.

This approach effectively converts the work that was just completed into a reusable blueprint. The resulting documentation can accelerate future projects and ensure consistent architecture patterns.

These definitions often include:

  • project goals

  • architectural assumptions

  • expected inputs and outputs

  • integration points

  • data schemas

  • example queries or workflows

Over time, this becomes a library of reusable project frameworks.
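A skeleton for such a definition can also be generated locally and then filled in by the model or by hand. The section list mirrors the bullets above; the function and defaults are illustrative assumptions:

```python
# Section names taken from the list above.
SECTIONS = [
    "Project Goals",
    "Architectural Assumptions",
    "Expected Inputs and Outputs",
    "Integration Points",
    "Data Schemas",
    "Example Queries or Workflows",
]

def project_definition(title, notes=None):
    """Render a markdown project-definition skeleton.

    `notes` maps section names to pre-filled content; anything
    missing is left as a TBD placeholder."""
    notes = notes or {}
    lines = [f"# {title}", ""]
    for section in SECTIONS:
        lines += [f"## {section}", notes.get(section, "_TBD_"), ""]
    return "\n".join(lines)

doc = project_definition(
    "Phishing URL Intelligence Collector",
    {"Project Goals": "Collect open-source intelligence for suspect URLs."},
)
```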

Practical Starting Points

For security professionals who are just beginning to integrate LLMs into their workflows, a few habits consistently improve results.

Use role-stacking
Define the expertise you want the model to simulate.

Provide detailed context
Include tools, platforms, and constraints in the prompt.

Request deeper reasoning
Ask for structured analysis instead of quick answers.

Describe systems instead of tasks
Frame the problem in terms of scale, architecture, and operational impact.

Iterate on prompts
The first prompt rarely produces the best result. Refining the question usually improves the output significantly.

Human Judgment Still Matters

LLMs are best viewed as amplifiers, not replacements.

They can accelerate research, help draft architecture plans, analyze code, and explore technical ideas quickly. They can also generate incorrect or misleading information with confidence.

The value of experienced security professionals remains unchanged: judgment, pattern recognition, and the ability to recognize when something does not look right.

Used correctly, these tools reduce the time spent on repetitive work and increase the time available for deeper analysis and decision-making.

Security work ultimately protects real organizations and real people. LLMs can assist in that mission, but they should remain tools under human direction.

The techniques outlined here represent practical patterns that many security engineers and analysts are already using successfully. As AI systems continue to evolve, these workflows will likely expand into more advanced automation and agent-driven architectures.

Many practitioners are already experimenting with:

  • AI-assisted detection engineering

  • automated investigation workflows

  • threat intelligence enrichment pipelines

  • LLM-driven research assistants

  • state-machine-based security agents

The security landscape continues to accelerate. Tools that improve analysis speed and expand thinking capacity will become increasingly valuable. LLMs are not magic solutions, but when used thoughtfully they can become powerful collaborators in the day-to-day work of defending systems.
