Discover how Fortra helps agencies monitor, detect & block domain impersonation attacks with phishing prevention best practices. Watch the podcast today!
[Anthony Jimenez]
Welcome back to Carahcast, the podcast from Carahsoft, the trusted government IT solutions provider. Subscribe to get the latest technology updates in the public sector. I'm Anthony Jimenez, your host from the Carahsoft production team.
On behalf of Fortra, we would like to welcome you to today's podcast, focused on domain impersonation, lookalike domains, and phishing protection. Nick Oram, Senior SOC Manager for Domain and Dark Web Services at Fortra, will discuss safeguarding federal websites with Fortra's brand protection.
[Nick Oram]
Thank you, everybody. And thanks to everyone for joining. Just to introduce myself here, my name is Nicholas Oram.
I'm the head of our dark web and domain monitoring services here at Fortra. I've been in the industry since about 2016. I've also worked in some of our other service areas, like the social media service, and I used to head up the mobile application monitoring solution.
I wanted to start off by talking about the domain lifecycle and how domains are abused, showing what a lookalike domain is and how they are created by threat actors. The main goal of this presentation is going to be defending against lookalike domains, showcasing our collection, curation, mitigation, and monitoring processes here at Fortra, and showing why these are needed for full protection across the attack chain. For these four different areas, we're going to show it based on the domain monitoring solution and also the credential theft monitoring solution.
And when I say credential theft, you could also call it the phishing protection solution at Fortra. After going over that, I will show some common threats that we're seeing on these services and what they look like. All right, just to start off, I wanted us to go over the elements of a domain, since some people may not know this verbiage.
To understand how domains can be abused, it's helpful to understand a few basics in the domain lifecycle itself. Domain names give internet users an easy way to communicate and visit websites. They define an area of authority for registrants, map to IP addresses that identify a web or mail server, and originate from the domain name system root.
At the highest level of the DNS hierarchy, top-level domains, also known as TLDs, are controlled by registries and make up the end portion of every domain name. Second-level domains are the next level below. These are managed through registrars.
They are often selected by companies to represent a brand and establish a unique website address. Subdomains, which make up the next or third level down, are controlled by a domain's registrant and can provide a means to differentiate areas or sections of a website. And that's just a little bit of background on the elements of a domain.
Next, I wanted to briefly talk about the lifecycle of a domain and its five phases. Domain names are not technically bought or sold.
Registrants pay for the right to use them for a predetermined period. Generic top-level domains, like your .coms, .nets, et cetera, have a typical lifecycle made up of five phases, which we will go over briefly here. In phase one, the domain becomes available.
During this phase, anyone can register the domain for one to 10 years. Phase two is registered and active: once a registrant pays the registration fee, the domain is considered active, and they can set up hosting for a website or email during this phase. Phase three is the expiration and renewal grace period.
If the domain is not renewed before its expiration date, a registrar will change the domain status to expired and turn off access. Phase four is the redemption period. If the registrant still does not renew within 45 days after expiration, the domain goes into a redemption period.
They still have the option to renew for 30 days if they pay a fee. And lastly, phase five is pending deletion. After the redemption period, requests to update domain information are denied.
This phase is usually five days. Then the domain is released and available for anyone to register again. These phases also shed light on why it's important to keep track of the domains that you register, so they don't end up in phase five, get deleted, and then get picked up by a threat actor.
So that is obviously an important thing to keep in mind as well. So cybercriminals abuse domains by registering lookalike variations that are slightly altered from an original. Hundreds of thousands of lookalike domains are registered each year to leverage the existing trust of reputable companies, confuse customers, and make money by committing fraud.
Many attackers use similar tricks to create lookalike domains. The techniques showcased on the screen are often used to generate several variations that threat actors use to implement attacks. In addition to the techniques on the screen, our detection capabilities also perform fuzzy matching to detect commonly known impersonations of your brands.
But if you look here, these are the most common ways someone could impersonate a domain. Fuzzy matching goes a bit beyond these, where we can detect patterns that aren't as closely related but could still be an impersonation of your brand. So fuzzy matching is another critical component of what we do as well.
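To make the fuzzy matching idea concrete, here is a minimal sketch of how a candidate domain could be scored against a watch list of protected brand names. The brand list, threshold, and scoring are purely illustrative, not Fortra's actual detection logic.

```python
# Minimal sketch: score a candidate domain's label against protected brand names.
# Brand list and threshold are illustrative only, not Fortra's detection logic.
from difflib import SequenceMatcher

PROTECTED_BRANDS = ["phishlabs", "acmebank"]   # hypothetical watch list
THRESHOLD = 0.8                                # hypothetical similarity cutoff

def flag_lookalike(domain):
    """Return (brand, score) pairs that the domain's label closely resembles."""
    label = domain.lower().split(".")[0].replace("-", "")
    hits = []
    for brand in PROTECTED_BRANDS:
        score = SequenceMatcher(None, label, brand).ratio()
        if score >= THRESHOLD or brand in label:
            hits.append((brand, round(score, 2)))
    return hits

print(flag_lookalike("phishlab5.com"))         # near match on "phishlabs"
print(flag_lookalike("acme-bank-login.com"))   # brand keyword buried in a longer label
```

A real detection pipeline would layer this on top of keyword matching, homoglyph handling, and the technique-specific variations shown on the slide.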
But research does show that lookalike domains appear legitimate for a reason. The human brain doesn't read words letter by letter. It processes the word as a whole.
This means we often overlook spelling errors that, if examined closely, would seem like nonsense. Our brains constantly anticipate what comes next using context, current input, and past experience to predict and fill in gaps. If the letters within our word are jumbled, we can still recognize the word correctly.
This phenomenon is called typoglycemia, which suggests that readers can understand text despite misplaced or scrambled letters. And this shows why impersonating a domain is quite successful, because we read it as phishlabs.com, not with the errors. Next, I wanted to talk about the anatomy of a lookalike domain attack.
And we have the diagram here to show the different steps. We'll chat about the steps here. Most lookalike domain threats have a common structure.
For step one, threat actors will first scout out successful brands to impersonate, then find legitimate domains the company already owns or uses. They'll use techniques to slightly modify the domain, like changing the TLD, using hyphenation, transposing, adding, or omitting letters. As they formulate new names, they will usually check for availability against the Whois database using free online search tools.
If they can't quickly find a name or decide to create a large-scale attack, a more sophisticated scammer might automate this part of the process by writing a script that generates hundreds or thousands of variations and programmatically query the Whois database to find which ones are available. Once they find their preferred names, they'll choose a registrar and register it online. Most scammers select from several registrars that are cheap or free that allow them to hide their identity.
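As a rough illustration of the variation techniques just described, a small script along these lines could churn out candidate names; the seed name and TLD list are placeholders, and an attacker would then check availability against WHOIS as described above.

```python
# Rough sketch of generating lookalike candidates for a seed name: TLD swaps,
# hyphenation, transposed letters, and omitted letters. Seed and TLDs are placeholders.
def lookalike_variants(name, tld="com"):
    variants = set()
    for alt_tld in ("net", "org", "co", "info"):       # 1. swap the TLD
        variants.add(f"{name}.{alt_tld}")
    for i in range(1, len(name)):                       # 2. insert a hyphen
        variants.add(f"{name[:i]}-{name[i:]}.{tld}")
    for i in range(len(name) - 1):                      # 3. transpose adjacent letters
        swapped = name[:i] + name[i + 1] + name[i] + name[i + 2:]
        variants.add(f"{swapped}.{tld}")
    for i in range(len(name)):                          # 4. omit a single letter
        variants.add(f"{name[:i]}{name[i + 1:]}.{tld}")
    variants.discard(f"{name}.{tld}")                   # drop the unaltered original
    return variants

print(sorted(lookalike_variants("acmebank"))[:10])
```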
And step two is creating a DNS record. Threat actors will then create an A record to point to the new website or an MX record for email delivery. Most web hosting companies offer domain, website, email, and DNS hosting with simple tools to add or update DNS resource records.
However, threat actors sometimes choose to use a different provider for each service. Spreading their attacks across multiple vendors adds a layer of complexity and can make takedown more difficult. To set up an attack using a website, the next step is to configure an A or address record.
As the most fundamental type of DNS record, A records map the domain or subdomain to an IP address. An AAAA record, also known as a quad-A record, is similar to an A record, yet it points to an Internet Protocol version 6 address. If a threat actor plans to send emails as part of their attack, they would configure an MX, or mail exchanger, record to indicate which mail server is responsible for sending and receiving email messages on behalf of the specified domain name.
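For reference, here is a minimal sketch of pulling the A, AAAA, and MX records for a suspicious domain using the third-party dnspython package; the domain shown is a placeholder.

```python
# Minimal sketch: resolve the A, AAAA, and MX records for a suspicious domain
# using the dnspython package (pip install dnspython). Domain is a placeholder.
import dns.exception
import dns.resolver

def record_summary(domain):
    summary = {}
    for rtype in ("A", "AAAA", "MX"):
        try:
            answers = dns.resolver.resolve(domain, rtype)
            summary[rtype] = [rdata.to_text() for rdata in answers]
        except dns.exception.DNSException:
            summary[rtype] = []     # no record, NXDOMAIN, timeout, etc.
    return summary

# An MX answer on an otherwise empty lookalike domain is a common early warning sign.
print(record_summary("acme-bank-login.example"))
```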
For steps three and four, most threat actors obtain SSL certificates for their registered websites to add a layer of legitimacy. SSL certification can be anonymous, obtained at no cost, and very effective at giving an appearance of safety to the end user. Once they build a website, they'll share a link in various ways, usually via spam, SMS, blog comments, or even phishing emails.
For an email-based threat like a business email compromise, also known as a BEC scam, a scammer might visit LinkedIn or other social media platforms to find names and email addresses of company employees to use when setting up email accounts. This added step may take time, yet it significantly increases the appearance of authenticity. And just to point out too, you can also find corporate emails in third-party leaks, as well as combo lists that are being posted on dark web forums, Telegram channels, things like that, and threat actors download those and use them for targeting efforts as well.
Emails might be sent from servers where the domain was registered, the website hosting provider, or a mailer program on a compromised third-party website. The goal is to increase delivery rates and evade detection, so attackers will change tactics as often as needed. The last step in the process includes crafting emails, distributing them to targets, and waiting for results.
So that's essentially the anatomy of what a lookalike domain attack can entail. It doesn't always follow these exact attack patterns, but anytime an email account is set up on a lookalike domain, it can be used for a BEC attack. And we'll get into BEC attacks later.
All right, let's see. Now, the main thing we're probably all here for is, what's the most effective way to defend against lookalike domains? To effectively protect your organization against domain impersonation, it's necessary to have a service that can deliver coverage across the facets of collection, curation, mitigation, and monitoring.
So I want to go over what is a general collection process across our service lines, before getting into the specific ones for domain and phishing protection. First, collected data needs to be analyzed and enriched before you can do much with it.
The stronger collection is, the more data needs to be sifted through in order to find the real threats. That's where our curation comes in at Fortra. As with collection, curation is threat-specific, with a concentration of unique technology and analysis based on the type of threat.
Collected data goes through a multistage, threat-specific process to weed out false positives. We mine our data using algorithms and machine learning to find what's relevant to our clients. This data is analyzed using an array of threat-specific automated logic.
Those machine-analyzed results are then taken through threat-specific handling processes by our analysts that specialize in that type of threat hunting. Analyzing a potential phishing site is completely different than analyzing an executive threat that was posted on a social media site. An analyst that's trying to analyze both and more simply isn't going to be as effective as someone who's specializing in one kind of threat review.
And that is why we do specialize. Our specialists bring more context to bear and make more accurate decisions. Without a center of excellence approach to this, more of that burden would be placed upon your team.
With this managed service approach, we can do all this work on your behalf. And that's the big thing here with both our domain service and phishing protection. The four different aspects that we're going to go over are all done on your behalf.
There's nothing you have to do within that process. It's all managed by our team of analysts on the domain side and the phishing side. First, let's talk about collection for the domain monitoring service.
Every year, cybercriminals register hundreds of thousands of lookalike domains to try to impersonate real brands. The goal of domain impersonation is to prompt interaction with a malicious email or lure the user into an action like visiting a phishing site so the threat actor can steal personally identifiable information. By detecting threat actors targeting your brand, you can protect against a variety of online threats.
However, detecting abuse requires visibility into new and existing domain registrations, plus the ability to mine those registrations for brand-related keywords and do fuzzy matching on those keywords to find even more threats that are out there. There are a number of useful sources for domain intelligence, some of which are outlined here. We gather TLD zone files every day, as newly registered domains come out on a daily basis.
We use those DNS zone files to identify every active domain across more than 2,000 top-level domains, including both generic and country code TLDs. We gather data from Secure Sockets Layer, or SSL, certificate transparency logs, which cover domains, subdomains, third-level domains, fourth-level, and so on, for the millions of new SSL certificates issued daily. We also analyze DNS traffic.
It contains domain names being queried and can be monitored for new domains. DNS queries can be performed using lookalike variations of legit domains to see if those lookalike variations currently exist. The domain service also monitors other DNS sourcing methods to find threats for our client base.
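As one illustration of the certificate transparency angle, the public crt.sh search service exposes a JSON output that can be queried for a brand keyword. The endpoint behavior and rate limits here are assumptions, the keyword is a placeholder, and this is not Fortra's collection pipeline.

```python
# Illustrative query of public certificate transparency data via crt.sh's JSON
# output for a brand keyword. Endpoint behavior and rate limits are assumptions;
# the keyword is a placeholder. Requires the requests package.
import requests

def ct_matches(keyword):
    resp = requests.get(
        "https://crt.sh/",
        params={"q": f"%{keyword}%", "output": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    names = set()
    for entry in resp.json():
        # name_value may hold several subject names separated by newlines
        for name in entry.get("name_value", "").splitlines():
            if keyword in name.lower():
                names.add(name.lower())
    return sorted(names)

for name in ct_matches("acmebank")[:20]:
    print(name)
```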
So, these are just some of the many sources we're using to collect data for our clients, and we're always actively improving the sources we're grabbing from so we can pull in even more data for our clients to identify these impersonation threats. You can see the various aspects here with proactive detection, our partner feeds, and also client feeds.
With proactive detection, we're able to detect phishing campaigns or pages while they're still in the process of being set up. There are a couple I just want to home in on that you see on the screen here. One would be our web beacon.
Our web beacon is a piece of code that we create that gets added to your website to see when threat actors are scraping your web page. The beacon can show early indications of a new phishing campaign and can catch it before that phishing campaign gets sent out. Our clients get alerts if we see that piece of JavaScript showing up on a newly registered domain.
The only way that we wouldn't see it is if the threat actor was able to remove that piece of code when they were scraping the page. But a lot of times, we do get alerts for these newly created phishing pages through our web beacon. I know our clients really like setting this up on all their different pages that have logins on them.
And kind of similar to the domain service, the phishing service is also collecting SSL certs. This is capturing both lookalike domains with newly registered SSL certificates. And this can also be one indication of maybe some future phishing threats are going to originate if we're seeing a lot of new SSL certs as well.
In terms of partner feeds, we have over 300 of them. Examples include things like VirusTotal, OpenPhish, PhishTank, et cetera. These are feeds we already know, and we pull in all that data and look for threats for our clients.
While not client-specific, we do source URLs submitted to these platforms and utilize regex patterns targeting HTML content and URL structure to determine a threat score for the content coming from these feeds. If content meets that threat score, it gets presented to an analyst to review. We also have client feeds as well.
Typically, if we're getting something from a client feed, that phishing campaign is probably already active. But we do, via APIs, get clients' abuse box data and will locate threats on their behalf. This allows our automation to filter out noise and present analysts with just the threats that need content review.
Obviously, this also saves an organization's time where our systems are able to clear out the noise really quickly and bring in the content that actually has phishing content for the analysts to review. Then obviously, if there's still a phish up on there, we can go ahead and get that URL mitigated. We'll talk about domain curation for the domain monitoring service here.
Collected domain intelligence must be analyzed to identify real threats. Searching domain intelligence for brand-related keywords and variations is necessary to find potential threats. However, domain strings can often unintentionally contain keywords.
Analysis is needed to remove these false positives from the process. This is where Fortra comes in. After our automation filters out a large amount of noise, our team of analysts will weed out the false positives by looking at key aspects of the identified domain.
They're looking at things like the domain string itself, scoring it based on its likelihood to be mistaken for legitimate brand use. Is this domain closely matching keywords? Are there industry terms related to the keywords also present in the domain?
If it was a banking client, are we seeing words like bank or loan, things like that? Is that also present in the name, which could confuse an end user if it has industry-related terms in it? Are there letters, symbols, or capitalization choices that would confuse an end user?
If we're seeing some of that, it could be an indication that this is something that would need to be put in front of a client. The way our team works, if something needs client feedback, that kind of content goes into a pending-client-input state.
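A toy version of that kind of scoring, combining brand similarity, industry terms, and confusable characters, might look like the following. The weights, word lists, and brand are hypothetical, not Fortra's actual curation logic.

```python
# Toy scoring sketch: brand similarity plus industry terms plus confusable
# characters. Weights, word lists, and the brand are hypothetical, not Fortra's logic.
from difflib import SequenceMatcher

BRAND = "acmebank"                              # hypothetical protected brand
INDUSTRY_TERMS = ("bank", "loan", "secure", "login")
CONFUSABLES = set("01-_")                       # digits and separators used to mimic letters

def impersonation_score(domain):
    label = domain.lower().split(".")[0]
    score = SequenceMatcher(None, label.replace("-", ""), BRAND).ratio()
    score += 0.2 * sum(term in label for term in INDUSTRY_TERMS)
    score += 0.1 * any(ch in CONFUSABLES for ch in label)
    return round(score, 2)

for candidate in ("acmebank-secure-login.com", "acmeban.com", "weather-acme.com"):
    print(candidate, impersonation_score(candidate))
```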
If it is an active threat against the brand, and the client reviews it and says, hey, we want mitigation done on this domain, it will get sent off to our takedown team and they will get that domain down. We also have content that gets put into auto mitigation. This is based on our incident handling process, or IHP.
For example, say we found the threat type of a cryptocurrency scam. Once it's found by the analyst and validated as a crypto scam, it'll be sent to automatic mitigation, and clients won't have to approve the takedown. Most of our clients take advantage of this IHP document just because it's less they have to do, less they have to approve takedowns on.
They just trust us: anytime we see a crypto scam, anytime we see something with brand abuse, they trust us to submit that and start to get that content taken down. But the important thing here is our team is doing all this curation on behalf of the organizations that we work with. We're not sending clients data dumps and making them sift through it.
Our systems are filtering out a bunch of that noise early on, then our analysts are doing the rest of the curation, getting rid of false positives and just showing you the impersonation threats on the domain side that you need to be aware of. On the phishing side, the goal of curation is the analysis and development of intelligence to identify phishing threats, and that team goes a step further by looking into threat actors, kits, credential files, and any other evidence they can use for mitigation. The domain side, by contrast, finds threats based on the syntax of the domain, what's in the actual domain name itself.
Phishing is doing that as well during their analysis, but they also go one step farther in analysis on their side. As you can see here, during the curation process, the team is also reviewing the syntax of the URL and also the page content to see if it's targeting one of our clients. Similar to the domain side, they're still looking at the things that we went over with typosquatting.
But one thing they do that the domain service doesn't: they look at whether there are known phishing directories indicating it could be part of a phishing kit, or whether a legitimate domain name is being used as a directory to confuse end users. Our phishing team analyzes the content of the page, seeing if it's impersonating the brand, but they also look to see if it's requesting sensitive information, indicating a phish. They also analyze the HTML content to identify drop points, and this could include things like a drop email, a drop site, whether they're sending the credentials to a Telegram channel, local log files, et cetera.
And we do have some recommendations here. Once content is identified, we suggest your security team actually review the webpages and search for any element that suggests malicious activity, and accounts with compromised credentials should be alerted and monitored for suspicious activity. As with all the other services, we also perform all the mitigation on behalf of our clients.
So it's not something our clients need to do. We know how to get the content down, so we will work to get that content removed on your behalf, which also saves you tons of time not having to do that work yourself. There are many different aspects of our mitigation network.
But the one I wanted to highlight, because I think it's the core of what we do at Fortra, and it's in the middle too, is the strategic relationships. Here at Fortra, we have effective relationships with our providers. Our goal is to work with providers to report content with high fidelity and get malicious content down in a timely manner.
Many of the registrars that we work with trust our submissions of reported malicious content and will get this content removed quickly. This can include things like kill switch integrations, as well as browser blocking pages. Browser blocking is key as it allows a warning for visitors while the content is reviewed for mitigation by the provider.
With our kill switch integrations, the domain can be suspended by the provider while they're actively investigating the threat on their end. We also partner with providers to educate them on the threats that we're seeing across our client base. This allows us to broaden the scope of malicious content that needs to be removed by giving them a current view of the threat landscape we are witnessing across their platforms.
In addition, for content we report to some providers, we can get pivot data off of those domains to find even more content that may be targeting our clients' brands. Working with providers, we make sure that we are providing them with all the evidence they need to get the malicious threat removed. This includes things like sending them screenshots of the threat, and the user agents and proxies we're using, so they can see the threat and how to reach it live. It's showing them, hey, here's an active phish, here's all the evidence we have, and this is how we're viewing that content.
That way they can mimic how we're doing it and get that content removed as quickly as possible. We also have APIs with our providers to get them host information to help them in the takedown process of the reported content. And lastly, I thought this was an interesting stat: on the mitigation side, around 30% of all of our cases are automatically mitigated without an analyst having to do it.
So that is a large share of overall takedowns that an analyst isn't having to report manually. We have automations in place, so those are handled automatically. That saves our team time, but it also gets the threats down quicker for your organization.
And I just wanted to clarify, with browser blocking, it doesn't mean that the threat is officially offline, but it does mean that the provider is actively working, investigating, and getting that content removed. When we're submitting content, especially phishing pages, our team is reporting it to Google Safe Browsing or to Microsoft, and we're getting blocking pages put up just so end users know, hey, this site is probably risky, you probably shouldn't go to it. But one thing I want to point out is that if an end user has those browser settings turned off, where these warnings don't come up, they could technically still reach the phishing page.
But browser blocking is definitely an effective strategy along the mitigation process. We don't stop when it gets just to browser block. We continue to work with the provider until that phishing page or offending domain is officially down.
Some orgs may say, hey, it's browser blocked, so it's officially mitigated, but technically it's not. Probably one of the most important aspects of our service is the monitoring aspect.
Everything that the analysts find, validate, and put into our web app, all that content is set to monitoring, which I think is important. When it's being monitored, it's a worry off your back: say it's a parked domain, you don't have to worry about constantly checking it over and over each day. Our system is monitoring these for changes every day.
And if a change does come up, our analysts will review the change and see whether it's noteworthy or not. Then if the threat type has changed, that will get escalated in the system. And I think this is very useful, like I was saying, for domains where there's no evidence currently that we could use to get them mitigated.
Most providers aren't going to take down a monetized links page, which you'll see in a little bit, or a parking page, just because there's not really evidence that it's impersonating your brand yet, even if the syntax of the domain appears to be impersonating your brand. But with our content monitoring, the way it works is that if a page meets a certain threshold of change, it gets sent to our system and the analysts will review it.
So we do have logic in place that measures that level of change, and the threshold is tuned to noticeable changes on the page. And like I said, when enough evidence for mitigation appears, it gets sent over to the mitigation team to get that content removed.
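The threshold idea can be illustrated with a simple diff between page snapshots; the cutoff value here is arbitrary rather than Fortra's tuning.

```python
# Simplified threshold-based content monitoring: compare today's snapshot with
# the previous one and flag for analyst review when the change ratio crosses a
# cutoff. The cutoff value is arbitrary, not Fortra's tuning.
from difflib import SequenceMatcher

CHANGE_THRESHOLD = 0.30    # hypothetical: flag when more than 30% of the page changed

def change_ratio(old_html, new_html):
    """Fraction of page content that differs between two snapshots."""
    return 1.0 - SequenceMatcher(None, old_html, new_html).ratio()

def needs_review(old_html, new_html):
    return change_ratio(old_html, new_html) >= CHANGE_THRESHOLD

old = "<html><body>This domain is parked. Buy it today!</body></html>"
new = "<html><body><form>Username: <input> Password: <input></form></body></html>"
print(round(change_ratio(old, new), 2), needs_review(old, new))
```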
So that's what the importance of this content monitoring is. One other thing to point out, even after we successfully get that domain taken down, that domain is still being monitored over the course of its lifespan in our system. And this is important because domains can come back online.
So say for some reason we got a phish down and it came back online. As long as that domain is being monitored in our system, our analysts will get notified of the change. Then we can work on getting that domain down quickly if it did for some reason come back up.
All domains that are in our system are also monitored for MX record changes. Like we were talking about earlier, this record specifies which mail server accepts email for the domain. A domain may lack content, but if it has an MX record, it can send email through it.
And like I said, this can be abused for BEC attacks, email phishing, and also scam campaigns. Establishing the presence of an MX record can help determine if the domain is malicious and provide evidence for our takedown efforts. So that is one thing I think is important.
Say it was a parked domain, but we have evidence that an MX record got set up on this domain, and maybe you have evidence of emails being sent out from that domain, even though the page is parked at the moment. Evidence like that we would use in the mitigation process to get that content removed. So that is one scenario where the page may not have anything on it, but if we have evidence that the MX record is actually being used to send phishing emails to your end users, and we have the emails and headers for those emails, we can submit that evidence in the takedown process.
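Conceptually, this MX monitoring amounts to comparing the currently published MX set against a stored baseline and alerting on any difference. A minimal sketch using the dnspython package, with placeholder domains and an in-memory baseline, might look like this.

```python
# Sketch of MX monitoring: resolve the current MX hosts for each watched domain
# and flag any domain whose MX set differs from the stored baseline. Uses the
# dnspython package; domains and the baseline store are placeholders.
import dns.exception
import dns.resolver

def current_mx(domain):
    try:
        answers = dns.resolver.resolve(domain, "MX")
        return {rdata.exchange.to_text().lower() for rdata in answers}
    except dns.exception.DNSException:
        return set()

def mx_changes(baseline):
    """Return domains whose live MX set no longer matches the saved baseline."""
    changes = {}
    for domain, known in baseline.items():
        seen = current_mx(domain)
        if seen != set(known):
            changes[domain] = {"was": sorted(known), "now": sorted(seen)}
    return changes

baseline = {"acme-bank-login.example": []}    # parked domain, no MX recorded yet
print(mx_changes(baseline))                   # a non-empty result would mean an MX appeared
```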
So that is another important factor about the MX record monitoring, and that every domain is monitored for that, which is the cool part. And every time one gets changed, you do get alerted in our web app as well. I want to just highlight the large amount of branding content you can see.
So 59% of our overall threats for the domain service had some kind of branding content on them, which I think sheds light on the importance of having an effective domain monitoring solution, because our clients' brands are getting impersonated pretty heavily if you look at how much data is there. Then if you go to the next slide and break down the malicious part, which is the 12% here, 62% of the content that we found via the domain service was phishing content. And this is strictly the domain service finding phishing content, where the syntax of the domain itself hit on one of the patterns we have.
This wouldn't include what our phishing service specializes in, which is finding content on the page itself, where maybe the URL is not spoofing the brand, but the content is a banking login. Our phishing team is really good at that, but this is just for the overall domain service. You can also see that we are getting a lot of counterfeiting, which is big on the domain service, and also cryptocurrency scams.
Definitely, if your organization is within that space, there are a lot of crypto scams that go out monthly. All right, so next we're going to go through a bunch of different examples. This isn't every single threat that we see on these services, but we'll run through some of the common ones that we do come across for our clients.
The first one here, and we just talked about it, is BEC. You can see here the image on the right; this is actually from our web app. You can see that our system detected a change, that there was an MX record spun up on that domain.
Then you get automatically notified in our web app: hey, an MX record is on this domain. We do recommend people block that domain on their mail servers as well. But this is all automated, and every time we add a domain, it's being monitored for these MX record changes.
I also think, if your organization has security awareness training or you run simulated phishing emails for educational purposes, BEC is definitely a good one to use, because it can do a lot of damage if an employee falls for a phish from a BEC attack. Threat actors can be quite clever with the targeting of these. On the left is what a typical BEC email could look like.
I know we used to get them, people spoofing the CEO and things like that, so they are pretty common out there. If you go to the next example, this is definitely a common threat we see across the domain service. We call these monetized links.
Now for all these domains that you see on the screen, all within the syntax, they're impersonating the brand. These are all banking clients that are clients of ours. But at its most benign, a threat actor can register a domain that impersonates a legitimate brand, park it, and serve ads to visitors.
This is what we call monetized links. Free domain service providers are particularly susceptible to abuse along with a number of cheap registrars that provide low cost bulk registrations. These providers are appealing because many don't require attackers to identify themselves.
Now with content like this, most providers won't take these down just because there's really no evidence of maliciousness on it yet, no brand abuse or anything like that. But the great thing about our system is that all these domains are being monitored daily. So if one of these goes from monetized links to branded content or a phish, you would get notified and we'd start the mitigation efforts on it.
The next example here is a typical example of what we call parked domains. These might not have gotten to the point of being monetized links, but the page is currently parked. Our monitoring will check these parked pages for changes, whether content gets added to the page or, like I was saying before, an MX record is created for that domain.
Even though the pages may not be live, you could still send BEC attacks if you have an active MX record on the domain. So these are all typosquatting our clients, but right now there's not enough evidence on the page to get the content removed. Like I said, these are all monitored by our system.
Both of these next examples are either using client logos or mentioning the client name on the page itself, and obviously the syntax of the domain itself is a typosquat. With evidence like this, if these are not domains that are on our safe list of domains owned by your company, or ones authorized to post this content, clients will submit them, like, hey, flag these for mitigation, take them down, and we'll work to get these removed.
But these, I would say, are pretty common brand abuse cases that we see, using the company name or logos or even maybe stealing images from your site or whatnot. The next one we see is cryptocurrency scams. These pages include both possible usage of the company brand name or logo to deceive visitors to fall for possible crypto scams.
And I think most of our client base wants these automatically taken down. So when I was talking about our IHP, a lot of clients will say, hey, anytime you see a crypto scam that's using our name or branding, we want it automatically submitted for mitigation. These are what a typical crypto scam website looks like.
Some of them even have areas where you may be able to log in, like you can see at the top, so they sometimes blur the line where they could almost be phishing, depending on what they want. But we typically label these as crypto scams within our system. And content like this, we do get mitigated.
If you go to the next slide, I had two different examples here of phishing sites. These would come from our credential theft service. The two things I just wanted to point out are that we're able to detect phishing content through the domain syntax as well as through the content on the page.
You can see in the one example, it's not impersonating the Amazon brand within the domain itself, but you can see the Amazon login on the page. Our system is able to detect what you see on the left, but the one on the right, you can see, is a Netflix phish, and we can also detect the phish itself based on the syntax of the domain.
So the phishing service is able to detect those threats in both scenarios, and you'd have complete coverage there in terms of impersonating on the page versus impersonating in the domain syntax itself. The last example I had is something we also see on the credential theft service: SEO poisoning and paid advertisements.
And I think these are successful because most people put trust in the top search results, but that obviously can be manipulated to put potentially malicious links higher up within the Google search results. Threat actors are also able to do things like restrict the ad to specific geolocations. So say your brand operated in a certain area; they could target that area, but someone in California may not be able to see it if the ad is geotargeted to that other area.
So that's one tactic that they use. They can also restrict access by device, where you may only be able to see the threat via mobile; if you try to pull it up in the browser on your computer, you're not going to be able to see that ad. The content can also be inaccessible unless you come through the advertisement's redirect.
And lastly, they regularly use lookalike domains to appear more legitimate. You can see with this Acme Bank brand, it's trying to pose as that bank, but you can see within the domain itself, it's obviously not the official link of that brand. So I know the whole SEO thing can be a very dangerous threat, but we do have coverage there as well on the credential theft side to detect threats from SEO and paid advertisements, which I think is another good area of coverage by our phishing service.
So just to wrap things up on domain impersonation, I think it's important that you have a service that's able to detect threats via the domain syntax, but also via the contents of the page. With a service that's only doing one of them, you're definitely going to miss tons of threats that are out there. And I would say that anyone that has an active website or login pages needs these services, because threat actors will find a way to impersonate your brand.
If there's potential financial gain in targeting your brands, they are going to do it. I think the overall theme of what I wanted to showcase to you today is that to protect against the whole attack chain, you need to have an impersonation monitoring service on the domain side and the credential theft side that's doing the full gamut of collection, curation, mitigation, and monitoring. And the great thing is, since we're a managed service, we're doing all that work on your behalf.
We're not putting that burden on your team. So we're saving your team time, and we're only showing you the threats that you need to be worried about and where we do need input from you. It's definitely saving tons of man-hours on your end that we're doing all this on your behalf, especially the monitoring aspect.
You can rest assured that if we find a typosquat that is currently a parked domain without any content on it, and there is a noticeable change where it goes from a parked domain to brand abuse or a phish, you're going to get notified about it. If there's evidence for mitigation, our team is going to work to get that content removed. And lastly, I spoke about it a bunch, but monitoring for MX record changes is something that you definitely want to have as well, just given how easy it is to send out a BEC attack from one of these domains.
It's important to have this kind of monitoring as well for all your domains.
[Anthony Jimenez]
Thanks for listening. And thank you to our guest, Nick Oram. Don't forget to like, comment, and subscribe to Carahcast and be sure to listen to our other discussions.
If you'd like more information on how Fortra can assist your organization, please visit www.Carahsoft.com/Fortra or email us at Fortra@Carahsoft.com. Thanks again for listening and have a great day.