Jeffrey Groman

Actionable Threat Intelligence, with AI

Most organizations receive CISA advisories, vendor security bulletins and industry alerts. Few have a clear process for turning that information into action. AI tools have changed what's possible for teams without a dedicated analyst.

A CISA advisory drops about a ransomware group actively targeting small financial institutions. The IT manager reads it, recognizes the group name from recent news, and isn't sure what to do next. The advisory lists TTPs along with a handful of IOCs. Without a process to contextualize it, the IT manager files it in a folder and nothing happens.

That gap between receiving threat information and acting on it is what threat intelligence programs are built to address. For most organizations, the barrier has been one of limited capacity. Turning raw threat data into usable guidance takes expertise and time that many teams don't have. That constraint has started to shift.

More Than a Feed

Threat intelligence is often associated with fancy tech platforms and six-figure tooling budgets. In practice, it is any information that helps you make better security decisions. Most organizations receive raw intel already. CISA's Known Exploited Vulnerabilities catalog is updated daily with new CVEs. The FBI publishes flash alerts for active threats. MS-ISAC produces advisories tailored to state, local, tribal, and territorial entities. If your industry has an ISAC, there is likely a steady stream of sector-specific alerts coming through it.

The problem is not access to information. It is the gap between raw data and actionable context. Large organizations have analysts who read threat reports and translate them into risk language: which systems are affected, which tactics map to existing controls, and which indicators should be pushed to detection tools. That translation step is where most of the value is generated. Without someone to do it, the report sits unread or gets forwarded to a folder that nobody reviews.

The Translation Problem

AI tools are well-suited to the translation step. They can summarize dense technical content, reframe it for a specific environment or audience, and produce structured output from unstructured input. That is not intelligence analysis in a formal sense. It is closer to triage assistance. But for organizations without dedicated analysts, that is often the capability that is missing.

The practical outcome is a shorter path from "here is a threat report" to "here is what it means for us." AI tools don't replace judgment about what to prioritize or how to respond, but they can reduce the labor cost of getting there. Moreover, this doesn’t require proprietary threat feeds or expensive tooling. Public sources provide plenty of raw material. A general-purpose AI tool is enough to start extracting value from what is already available. Use cases that consistently produce useful output include CVE triage, threat actor profiling, IOC formatting, detection query drafting, and executive summarization.

Putting It to Work

Here are four specific tasks that are accessible to most organizations today.

CVE Triage

When a new CVE comes out, the entry includes a description, affected products, and sometimes remediation guidance. What it doesn't include is context for your specific environment. Paste the entry into an AI tool and ask a direct question: "We run [specific application and version]. Does this vulnerability affect us, and what should we check for?" The result is a plain-language risk assessment tailored to your stack rather than a CVSS score without context. This works for NVD entries as well, particularly where the severity rating doesn't fully reflect exploitability in your environment.
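That question can be templated so environment context is included every time. A minimal sketch, in which the stack list and CVE text are placeholders rather than real data:

```python
# Sketch: wrap a pasted CVE entry in a prompt that carries environment
# context. The stack and CVE text below are illustrative placeholders.

def build_triage_prompt(cve_entry: str, stack: list[str]) -> str:
    """Combine a pasted CVE/KEV entry with a description of what we run."""
    return (
        f"We run: {', '.join(stack)}.\n"
        "Does this vulnerability affect us, and what should we check for?\n\n"
        + cve_entry
    )

prompt = build_triage_prompt(
    "CVE-2024-0001: Remote code execution in ExampleApp before 2.5.1.",
    ["ExampleApp 2.4.0", "Windows Server 2022", "nginx 1.24"],
)
```

The same template works whether you paste the result into a chat interface or send it through an API, and it keeps the environment context consistent across reviewers.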

Threat Actor Contextualization

When a threat group makes headlines for targeting your sector, take a public report, whether that is a CISA advisory, a Mandiant write-up, or a CrowdStrike blog post, and ask an AI tool to summarize their known tactics and identify which of those tactics your current controls are designed to address. This can provide a rough gap analysis in a fraction of the time a manual review would take.

IOC Processing

Public threat reports and advisories routinely publish IOCs like IP addresses, file hashes, and domains associated with known threat actors. Searching for those indicators in your SIEM is straightforward, and any hits should trigger a full threat hunting exercise, if not an incident response activation.
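Extracting the indicators from a pasted advisory is itself mechanical. A minimal sketch using simplified regex patterns; real reports often "defang" indicators (hxxp, 1.2.3[.]4), so they are normalized first, and the domain TLD list here is a deliberately short placeholder:

```python
import re

# Sketch: pull common IOC types out of pasted advisory text so they can
# be searched in a SIEM. Patterns are simplified for illustration.

def extract_iocs(text: str) -> dict[str, set[str]]:
    # Refang common defanging conventions before matching.
    text = text.replace("[.]", ".").replace("hxxp", "http")
    return {
        "ipv4": set(re.findall(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", text)),
        "sha256": set(re.findall(r"\b[a-fA-F0-9]{64}\b", text)),
        "md5": set(re.findall(r"\b[a-fA-F0-9]{32}\b", text)),
        # Short TLD list as a placeholder; extend for real use.
        "domain": set(re.findall(r"\b(?:[a-z0-9-]+\.)+(?:com|net|org|io|ru)\b", text)),
    }

report = ("C2 at 203.0.113[.]45 and evil-example[.]com; "
          "dropper md5 d41d8cd98f00b204e9800998ecf8427e")
iocs = extract_iocs(report)
```

The output is a set per indicator type, ready to deduplicate and paste into a SIEM search or a blocklist review.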

Executive Summarization

Security bulletins are written for technical audiences. They assume familiarity with terminology, architecture concepts, and the significance of specific control gaps. Leadership audiences need a different version: what happened, who is at risk, what the business impact could be, and what decision is being asked of them. AI can translate a technical advisory into a three-paragraph summary suitable for a leadership briefing, with business impact language and a clear recommended action plan.

Where to Start

The sources are free. CISA's Known Exploited Vulnerabilities catalog is public and searchable. MS-ISAC advisories are available to members at no cost. FBI flash alerts are distributed broadly. Sector-specific ISACs like FS-ISAC for financial services and H-ISAC for healthcare publish member-facing intelligence on a regular cadence.

A practical starting workflow: once a week review new CVEs. For any entry that matches software your organization runs, put it through an AI tool with your environment context included in the query. This process can even be automated. Flag anything that applies to systems you own and bring it into your patching process or risk register.
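The automation can start very small. A sketch of that weekly filter against the KEV catalog; the feed URL and field names match the catalog's published JSON format at the time of writing, but verify them before relying on this, and the watchlist below is a placeholder:

```python
import json
import urllib.request

# Sketch of a weekly KEV review. Feed URL and field names reflect the
# published KEV JSON format; verify before use.
KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")

def fetch_kev() -> list[dict]:
    """Download the current KEV catalog (requires network access)."""
    with urllib.request.urlopen(KEV_URL) as resp:
        return json.load(resp)["vulnerabilities"]

def match_watchlist(vulns: list[dict], watchlist: list[str]) -> list[dict]:
    """Keep entries whose vendor or product matches software we run."""
    watch = [w.lower() for w in watchlist]
    return [
        v for v in vulns
        if any(w in f"{v.get('vendorProject', '')} {v.get('product', '')}".lower()
               for w in watch)
    ]

# Offline demonstration with entries shaped like the live feed:
sample = [
    {"cveID": "CVE-2024-0001", "vendorProject": "Fortinet", "product": "FortiOS"},
    {"cveID": "CVE-2024-0002", "vendorProject": "Acme", "product": "WidgetServer"},
]
hits = match_watchlist(sample, ["fortinet"])
```

Anything this filter surfaces is a candidate for the AI triage step above it: paste the entry in with your environment context and ask what to check.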

Keep in mind that specificity matters. An AI query that names your specific technology stack produces more relevant output than a generic question. The more context you give, the more useful the response. The goal is not a full threat intelligence program on day one. It is a repeatable habit that builds context over time and eventually becomes part of how your team operates.

Develop a Process

Organizations that start small with threat intelligence, even a weekly review of public sources, develop a sharper picture of their risk environment over time. The benefit compounds. The team gets better at asking useful questions. The outputs become more tailored to the actual environment. That is how programs mature. Groman Cyber helps organizations build structured threat intelligence workflows, develop reporting processes, and integrate intelligence into their existing security programs. Contact us to get started.

Jeffrey Groman

The Access that Outlived the Employee

Most organizations have more active accounts than they realize, spread across Active Directory, cloud platforms, and dozens of SaaS applications that each manage access independently.

A long-tenured employee leaves. IT disables their Active Directory account the same day. But their Salesforce login still works. Their Slack account is still active. They were one of three people who knew the credentials to a shared AWS console account, and nobody changed that password. Six months later, an auditor asks who has access to your CRM. Nobody has a clean answer. That moment is what identity security programs are built to prevent.

Why Identity Is Hard to Manage Now

Identity sprawl mirrors data sprawl. SaaS tools, cloud platforms, and internal systems each maintain their own accounts. There is no single place where all of an organization's active identities live, and no automatic process to keep them synchronized.

On-premises Active Directory covers the Windows environment and domain-joined systems. Azure AD (now called Entra ID) covers Microsoft cloud services. AWS has its own IAM. Google Cloud has its own identity layer. Salesforce, GitHub, Slack, and every other SaaS application manages its own users independently. Your identity environment is not one thing; it is a collection of disconnected systems that do not talk to each other by default.

Provisioning happens fast. Someone needs access to do their job, so access gets granted. Deprovisioning is painstaking and often less consistent, because it’s rare to find a complete list of access granted across all systems. The result is orphaned accounts: former employees, contractors, or vendors whose access was never fully revoked.

Unfortunately, this is not hypothetical. Many organizations find they have active accounts belonging to past employees and contractors who left months or years ago.

Attackers have taken notice. Credential-based attacks have grown precisely because they’re effective. Organizations have strengthened perimeter defenses, so attackers went around them. They log in using compromised credentials, often into accounts that should have been removed long before the breach.

What Identity Security Actually Covers

Organizations that want to get their arms around identity security build a multi-tiered program something like this:

Account inventory: You need a working list of every account that exists across every system. Who created it, when, and why. This is harder than it sounds in a mixed environment of on-premises Active Directory and cloud platforms, but it is the foundation. Without an inventory, you cannot run a meaningful access review, assess your privileged account exposure, or produce evidence for an audit.

Authentication controls: Multi-factor authentication on every external-facing system is a baseline expectation now, not a bonus feature. If staff are accessing systems from outside the network, MFA should be required. Single sign-on reduces the number of independent credential sets in play and makes deprovisioning cleaner. When someone leaves, disabling their SSO identity cascades across connected applications rather than requiring separate action in each one.

Least privilege: Users should have access to what they need to do their job, not everything that might be convenient. In practice, access accumulates over time. Someone gets temporary elevated access for a project and nobody removes it when the project ends. An access review finds and corrects that drift before it creates exposure.

Privileged access management: Administrative accounts in Active Directory, root-level access in AWS, and super-admin accounts in Salesforce or GitHub carry disproportionate risk. A compromise of a standard user account is serious. A compromise of an admin account is a different category of problem. Privileged accounts need tighter controls, separate credentials, and closer monitoring than standard accounts.

Identity lifecycle management: What happens when someone joins, changes roles, or leaves? If that process is informal or incomplete, orphaned and over-privileged accounts are the predictable result. Lifecycle management is not a technical control; it is a process that connects HR events to IT actions.

Each of these is connected. SSO makes account inventory easier. Least privilege policies are only meaningful if lifecycle management keeps them current. The components reinforce each other.

Where to Start

Start with privileged accounts. Admin accounts in Active Directory, root and admin accounts in your cloud environments, and super-admin accounts in your most sensitive SaaS tools are where a compromise causes the most damage. Inventory those first, then audit who has them and whether each one is still warranted. Remove or restrict anything that is not clearly necessary.

Enforce MFA on every external-facing system. Email, VPN, cloud consoles, SaaS applications: if your staff are accessing these systems from outside your network, MFA should be required. This is one of the highest-return actions an organization can take relative to the effort involved.

If you haven’t done so yet, start conducting access reviews. Pull the user list from Active Directory, your primary cloud platform, and your most sensitive SaaS tools, then compare those lists against your current employee roster. The accounts that do not match your active staff are your starting point. If you already have a SOX requirement for access reviews, expand the same practice beyond the SOX in-scope systems.
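That first-pass comparison is a set difference. A minimal sketch; the roster and exports below are placeholders, and real lists come from HR and each system's admin console, normalized to a common key such as email address:

```python
# Sketch: first-pass access review by set difference. Accounts flagged
# here are candidates for review, not automatic removals: service
# accounts and vendors will show up too and need a human decision.

def orphaned_accounts(exports: dict[str, set[str]],
                      roster: set[str]) -> dict[str, set[str]]:
    """Per system, accounts with no match on the active-employee roster."""
    return {system: users - roster for system, users in exports.items()}

roster = {"avi@example.com", "bea@example.com"}
exports = {
    "active_directory": {"avi@example.com", "bea@example.com", "carl@example.com"},
    "salesforce": {"bea@example.com", "dana@example.com"},
}
flagged = orphaned_accounts(exports, roster)
```

Running this against three or four systems is usually enough to surface the first batch of orphaned accounts and justify a more formal review cycle.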

Build an offboarding checklist that reflects reality. When someone leaves, what systems actually need to be touched? Document the list of identity stores. Test it on the next departure and update it when you find gaps. This is the most direct way to reduce orphaned account risk over time.

Connect HR and IT processes. Most identity lifecycle problems happen at the boundary between those two functions. If IT does not hear about a departure or a role change in time, accounts persist. That connection needs to be deliberate. A shared process, a notification trigger, or a regular sync is better than assuming information will flow on its own.

Identities Are the Front Door

Identity is not purely a technical problem. It is an organizational one. Controls only work if the right processes exist around them. Requiring MFA won’t help if shared accounts exist on critical systems. An access review is only useful if action is taken on the findings.

The organizations that manage identity well are not necessarily the ones with the best tools. They are the ones with clear ownership of who can access what, and a reliable process for keeping that current. Identity security is not a one-and-done exercise. Access changes every time someone joins, leaves, or moves to a different role. The program has to keep pace with the organization.

Groman Cyber helps organizations assess and improve identity security across on-premises and cloud environments. If you are not sure where your exposure is, or if your offboarding and access review processes have gaps, we can help. Contact us to get started.

Jeffrey Groman

The Data Hiding in Your Organization

Most organizations collect more data than they realize, store it longer than they should, and protect it less consistently than they think.

Imagine an auditor, regulator, or lawyer asking your team a simple question: "Where is your sensitive data stored?" Nobody gives a confident answer. Different people point to different systems. Someone mentions a shared drive that may or may not still be in use. That moment of uncertainty is exactly the gap a data management program is designed to close.

The Problem With Unmanaged Data

Data sprawl is the default state for most organizations. Cloud storage, SaaS tools, email, shared drives, and legacy systems all accumulate data over time, and keeping a central accounting of it is a challenge. That is not a failure of diligence. It’s what happens when organizations grow faster than their ability to maintain appropriate data hygiene. The problem is that unprotected data is a liability, not just an asset. You cannot apply the right controls to data you have not cataloged. If you do not know where sensitive information lives, you cannot protect it, restrict access to it, or monitor it effectively.

There is also a compliance dimension. HIPAA, GDPR, CCPA, and SOC 2 all require you to know what data you hold, where it lives, and what protections are currently in place. If you cannot answer those questions, you cannot produce audit evidence. The absence of an inventory can itself be a finding.

Retention risk is another piece of this. Data you no longer need but still hold is still data that can be breached, subpoenaed, or misused. Keeping everything indefinitely is not a strategy; it is accumulated risk. Attackers often find the forgotten data. The legacy system nobody decommissioned. The shared folder with permissions that were never tightened. That forgotten data is frequently the path of least resistance.

What is a Data Management Program?

A data management program is not monolithic. It is a set of connected practices that give your organization visibility and control over its information. Data management practices include:

  • Data inventory: What data do you have, where does it live, and who owns it? This is the foundation everything else builds on. Without an inventory, the rest of the program has nothing to work from.

  • Data classification: Not all data carries the same risk. A simple four-tier model (public, internal, confidential, restricted) lets your team apply the right level of protection to the right data, without treating everything like a state secret.

  • Retention and disposal: How long do you keep each type of data, and how do you get rid of it when you no longer need it? Deletion is not just a hygiene practice; it is a risk-reduction measure.

  • Access controls: Who can reach which data, and is that access still appropriate? Many organizations find that access granted during onboarding never gets revisited. Accounts accumulate permissions over time, and nobody goes back to check.

  • Roles and ownership: Every data set needs someone accountable for its handling. Data without a named owner tends to drift, pile up, and get forgotten.

These five components are not independent. Classification informs retention schedules. Inventory informs access controls. Ownership keeps all of it from becoming theoretical. The whole thing works better as a connected program than as a set of isolated policies.
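The link from classification to controls can be made concrete as a simple lookup. A sketch using the four-tier model from the list above; the specific controls assigned to each tier are illustrative assumptions, not a standard, and should be adjusted to your own policy:

```python
# Sketch: classification tiers mapped to baseline handling requirements.
# Tier names come from the four-tier model; the controls per tier are
# illustrative, not prescriptive.

HANDLING: dict[str, dict[str, bool]] = {
    "public":       {"encrypt_at_rest": False, "access_review": False,
                     "retention_limit": False, "activity_monitoring": False},
    "internal":     {"encrypt_at_rest": False, "access_review": True,
                     "retention_limit": True,  "activity_monitoring": False},
    "confidential": {"encrypt_at_rest": True,  "access_review": True,
                     "retention_limit": True,  "activity_monitoring": False},
    "restricted":   {"encrypt_at_rest": True,  "access_review": True,
                     "retention_limit": True,  "activity_monitoring": True},
}

def requirements(tier: str) -> dict[str, bool]:
    """Baseline controls for a tier; an unknown tier fails loudly."""
    return HANDLING[tier]
```

The value of writing it down this way is that the policy becomes checkable: given a data set's tier, anyone can look up what handling it requires.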

Avoid Boiling the Ocean

A common mistake organizations make when starting a data management program is trying to do everything at once. Cataloging every file in the organization on day one is not realistic, and attempting it often means the project stalls before it produces anything useful.

A better approach is to pick a lane. Start with customer data, financial records, employee data, or intellectual property. Build your inventory there, establish classification and ownership, and get that slice of the program working before you expand. Start with your most sensitive data first. These are the data sets where a visibility gap carries the most consequence. If a breach happens, you want to know exactly what was exposed and who owned it.

Classification drives action. Once you know what you have and how sensitive it is, the right controls become clearer. Classification is not bureaucracy; it is a decision-making tool. It is how your team knows whether a given piece of data needs encryption, strict access controls, or both. Assign owners early, even informally. Someone should be able to answer questions about each data set. If nobody can, that is a flag worth addressing before an audit surfaces it.

Write policy around real-world behavior. If your retention schedule says three years but people store files indefinitely, the policy is unrealistic. Effective policy reflects what the organization can actually enforce, not what sounds reasonable in a document.

Start With Visibility

You do not need a perfect program on day one. You need a start. The organizations that handle data well are not the ones with the most sophisticated tools. They are the ones that know what they have, who owns it, and what happens to it over time. That knowledge does not come from tools alone; it comes from building the practices that keep the inventory accurate and the ownership clearly defined.

A data management program is not a one-time project. It requires ongoing ownership, periodic review, and updates as the organization changes. People move roles, systems get added, vendors change. The program has to keep pace.

Groman Cyber helps organizations build data management programs that are practical, auditable, and built to last. If you are not sure where to start, or if your current approach has gaps you cannot fully account for, we can help. Contact us to get started.

Jeffrey Groman

Can an IRP be useful?

According to IBM research, having an incident response plan and a trained team can reduce the cost of a breach by almost $500,000. Isn’t it time to take your incident response plan off the shelf?

It's 2 AM on a Friday when one of your engineers texts you: something unusual is happening on the network. By the time you're on a call with three people who all have different opinions about what to do next, thirty minutes have passed and nobody has actually done anything. That's exactly what an incident response plan is designed to prevent.

A useful plan changes everything

When a security incident hits, the pressure is immediate. Systems may be down, data may be at risk, and stakeholders are asking questions you don't have answers to…yet. In that setting, improvisation is expensive. People guess at priorities, duplicate effort, or worse, take actions that make containment more difficult. Small incidents become big ones not because teams are incompetent, but because they're making decisions under stress without a shared playbook.

A documented incident response plan can go a long way. It can inform your team who does what, in what order, and how decisions should be made before anyone is under pressure.

There's also a compliance dimension. Frameworks like HIPAA, PCI DSS, SOC 2, and NIST all require organizations to have a documented incident response process. If your company is subject to any of these, not having a plan isn't just a gap in your security program, it's a gap in your audit evidence. And auditors will ask for it.

Cyber insurance is another driver that's grown more significant in recent years. Carriers increasingly ask about incident response maturity when underwriting policies and at renewal. A documented, tested plan can affect what coverage you're eligible for along with the premium you’ll pay. A plan that lives only in someone's head doesn't satisfy an underwriter.

Finally, there's the reputational cost. The organizations that come out of incidents with their customer relationships intact are almost always the ones that responded quickly and communicated clearly. A slow, disorganized response — even to a relatively contained incident — signals to customers and partners that you're not in control.

The plan doesn't prevent incidents from happening. It determines how fast and cleanly you get out of one.

Putting it together in practice

An incident response plan isn't something you read once and file away. It lives in three states: resting, activated, and post-incident. In the resting state, the plan is documented, distributed, and kept current. Your team knows it exists, understands their role in it, and it gets reviewed on a regular cycle. Nothing is happening, but you're ready.

When an incident is declared, the plan is activated. The right people are notified, roles are assumed, and work begins according to a defined process. This is where the preparation pays off: nobody is waiting to be told what to do.

After the incident is resolved, you enter the post-incident phase. The team documents what happened, what worked, what didn't, and what needs to change. That review makes the next response better.

Part of what makes a plan operational is severity classification. Not every alert requires the same response. A well-structured plan defines which situations call for the full incident response team and which the security team can handle on its own. That distinction keeps your organization from treating every notification like a crisis, and from treating an actual crisis like a routine ticket.
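A severity scheme only works if the criteria are unambiguous enough to apply at 2 AM. A minimal sketch of that kind of decision rule; the tier names and criteria here are illustrative assumptions, not a standard, and should be tuned to your own plan:

```python
# Sketch: a severity classifier of the kind an IRP might define.
# Criteria and tier names are illustrative placeholders.

def classify_incident(data_at_risk: bool, operations_down: bool,
                      active_attacker: bool) -> str:
    """Map incident characteristics to a response tier."""
    if active_attacker or (data_at_risk and operations_down):
        return "SEV1: activate the full incident response team"
    if data_at_risk or operations_down:
        return "SEV2: security team leads, Incident Commander on standby"
    return "SEV3: security team handles through normal workflow"
```

Even a table this small removes the argument about whether to wake people up: the plan already answered the question.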

Defined roles are another cornerstone. In a real incident, your Incident Commander coordinates the overall response and makes the calls on escalation and communication. Your Technical Lead investigates the scope of the incident and drives containment. Your Communications Lead manages what gets said to stakeholders, customers, and regulators, and when.

That last piece matters more than people realize. Regulatory notification windows are tight. HIPAA breach notification, for example, requires notifying affected individuals within 60 days of discovery. If your Communications Lead doesn't know what they're authorized to say, or your Incident Commander doesn't know the deadline, you can add a compliance violation on top of an already bad day.

A mature plan also connects to the rest of your business continuity posture. If a cyber incident threatens operations, it should trigger your Business Continuity Plan. The two plans should hand off cleanly so nothing falls through the gap between "security incident" and "business disruption."

Muscle memory

A documented plan is a starting point, not a finish line. The organizations that respond well to incidents have practiced their plan repeatedly, and across different scenarios. Tabletop exercises are the most accessible way to do this. You walk your team through a scenario: ransomware locking down a critical system, a credential breach exposing customer accounts, a vendor suffering an attack that reaches your environment, all without touching live systems. The goal isn't to pass a test. It's to find the gaps before an attacker does.

These exercises consistently reveal the same kinds of problems: unclear decision authority, contact lists with outdated numbers, ambiguity about who owns external communication. Better to find those in a conference room than at midnight in the middle of a crisis.

Regarding contact lists: test your communications tree. Don't assume the list is accurate because someone built it six months ago. Run a brief drill where you actually try to reach people using the documented contacts. You'll often find a number that's changed, a backup contact who doesn't know they're a backup, or a vendor liaison who left the company.

Annual review is not optional. Your organization changes: people move roles, vendors change, systems are added or retired. And the threat landscape is changing as well. Assign a specific owner to review the plan every 12 months and after any real incident, however minor.

After-action reviews are where the best learning happens. Even a low-severity incident like a phishing attempt that was caught and contained produces useful information if someone takes the time to write it down. What triggered the alert? How long did detection take? What would have happened if it had gone further? A brief lessons-learned document closes the loop and makes your next response faster.

Work on it now, before your next incident

A plan that nobody has read, practiced, or taken ownership of isn't an incident response plan — it's an ignored document in the back of a file drawer. The companies that handle incidents well aren't lucky. They prepared. That preparation means assigned roles, practiced scenarios, tested communications, and a plan that gets reviewed and updated rather than left to drift. It also means leadership treats incident response as an ongoing capability, not a checkbox.

If your organization doesn't have an incident response plan yet, or has one that hasn't been tested, Groman Cyber can help. We work with organizations to build plans that are practical and usable, run tabletop exercises that surface real gaps, and support teams through the review process after incidents occur. Contact us to get started.
