Future ransomware?

In the history of computer viruses, malware has gone from experimental, to stealthy, to outright destructive, such as CIH. Applying the CMM (Capability Maturity Model) to malware maturity, it is easy to see that the art of creating malware follows the steps indicated in the model.


But each process can be divided into sub-processes, and the CMM model can be applied to each sub-process in turn. Consider the maturity of ransomware as a sub-process of malware as a whole. While doing so, also take the following snapshot from Google Trends into consideration.


What happens when ransomware developers realize that ransomware can be extended not only to data, but to control over IoT devices? The chilling realization that not only are you not in control of your own home, but that somebody else is…

We are not quite there yet, but ransomware keeps menacing the computing world. Thankfully, there are small beacons of light when it comes to fighting it. One such beacon is the web service “No More Ransom”, provided by Europol and other actors.

Hashcat and GoCrack

Hashcat – our favorite password cracking tool – has a new major release, v4.0.

Among the new features, one worth mentioning is support for cracking passwords and salts of up to 256 characters.
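As a minimal sketch of basic usage (the file names and the MD5 example hash are just illustrations, not taken from the release notes), cracking a raw MD5 hash with a straight wordlist attack looks like this:

```shell
# Create an example target: the MD5 hash of the word "hashcat"
printf 'hashcat' | md5sum | cut -d' ' -f1 > hashes.txt
echo 'hashcat' > wordlist.txt

# A straight (-a 0) wordlist attack against raw MD5 (-m 0) would then be:
#   hashcat -m 0 -a 0 hashes.txt wordlist.txt
# Note: as of v4.0 the default kernels support passwords and salts up to
# 256 characters; the -O flag re-enables the faster optimized kernels,
# which keep the old, shorter length limits.
```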


Just days before, FireEye released an open-source tool, GoCrack, on GitHub for creating and managing a clustered hashcat setup.


More information on the new hashcat release can be found here.

Latest and greatest ransomware

UPDATE: Malwarelabs has released a post that takes a closer look at BadRabbit (Petya/NotPetya).

As mentioned yesterday, ransomware attacks have become the new normal, as emphasized by news articles last night. Technical safeguards such as Microsoft’s “Controlled folder access” and other techniques will help.

The problem lies in usability. Many of these attacks could already be mitigated using existing technology, such as application whitelisting, which comes in many different flavors. Although this technology would – if properly deployed – prevent most ransomware attacks, it affects the end user’s experience adversely and is thus not deployed unless the security requirements are very high.

Hopefully, both application whitelisting and products similar to “Controlled folder access” will provide protection that withstands most “en masse” malware without a negative user experience.

Windows 10 Fall Creators Update – ransomware protection

Unfortunately, the battle against ransomware has become the “new normal”. Most IT security administrators probably focus their efforts on updating anti-malware suites and maintaining a good backup scheme to keep a solid security posture against ransomware attacks.

With this in mind, Microsoft has released a really nice feature in the Windows 10 Fall Creators Update, called “Controlled folder access”, which lets you limit file modification rights in protected folders to specific applications only or – as the default setting is – none!

Possibly a feature that could severely cripple your system if used carelessly, but a really nice one that will hopefully mature and become a baseline for most new computers.
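As a sketch of how the feature might be toggled – assuming Windows Defender is the active antivirus, per Microsoft’s Defender documentation – it can be enabled from an elevated PowerShell prompt:

```powershell
# Enable Controlled folder access (requires an elevated prompt and
# Windows Defender as the active AV). Use 'Disabled' to turn it off,
# or 'AuditMode' to only log would-be blocked writes.
Set-MpPreference -EnableControlledFolderAccess Enabled

# Allow a specific, trusted application through (path is a placeholder):
Add-MpPreference -ControlledFolderAccessAllowedApplications 'C:\Tools\backup.exe'
```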

Read more here, here and here.

Heartbleed and Shellshock… CVE-2014-7203 and counting

2014 seems to be the year of the Big Bugs… First, Heartbleed was publicly disclosed in April this year. And now, just a couple of days ago, Shellshock was disclosed. Both are massive bugs; the kind of bugs that should be extinct by now. But indeed they are not. Even mainstream media such as the Washington Post, ABC News and the Swedish newspaper Aftonbladet have already written articles on the subject.

And apparently, both Shellshock and Heartbleed have reached Wikipedia…

So, what do they do? Well, both are bugs that really rock the foundations of the Internet. In short, Heartbleed discloses information stored in the physical memory of a vulnerable server running OpenSSL. OpenSSL provides encryption and is typically used by mail servers and web merchants, where the traffic should be encrypted and not interceptable by others. But the server is likely to keep that information in physical memory, from where it can be disclosed to malicious users trying to exploit the bug.

Shellshock might be even worse, considering the simplicity of exploitation… Merely changing your User-Agent when browsing the web is a simple way to exploit vulnerable web servers, and many web servers have been flooded with User-Agent strings similar to this: “() { foo;};echo;/bin/cat /etc/passwd”. Troy Hunt has written a great article on the simplicity and impact of this bug.
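As an illustration of how trivially the bug is triggered, here is the widely circulated local self-test (harmless to run; it only defines an environment variable and starts a new bash):

```shell
# CVE-2014-6271 self-test: define an environment variable containing a
# function definition with trailing code, then start a new bash.
# A patched bash prints only "test"; a vulnerable bash also prints
# "vulnerable", because it executes the trailing echo while importing
# the function definition from the environment.
env x='() { :;}; echo vulnerable' bash -c 'echo test'
```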

Is Shellshock the last major bug we’ll see? Nope. Maybe the last major bug in 2014, but still… I’m not sure. CVE-2014-7203 and counting…

Quick tip, visual aid (updated for Win 8)

In a recent blog post ( https://www.ictsecurity.se/?p=96 ) I wrote about how to create a visual aid showing whether the command line is run with administrative rights or not. In short, the visual aid depended on a command that was run every time the command line was invoked. In Windows 7 and earlier versions, FSUTIL.exe did this excellently, since that command required administrative rights to run. However, Windows 8 and onwards can run FSUTIL even without administrative rights, so I needed to find a substitute. What I came up with was the following command:

%windir%\system32\auditpol.exe /get /sd > nul 2> nul && (color 0C & title %USERDOMAIN%\%USERNAME%) || (color 0A & title Non Administrator - %USERDOMAIN%\%USERNAME%)

This executes the command auditpol.exe /get /sd and suppresses both standard output and error output. If the command succeeds, it was executed as an administrator and the command “color 0C & title %USERDOMAIN%\%USERNAME%” is run, which alters the title and color. If the command fails, it was executed with normal rights and the command “color 0A & title Non Administrator - %USERDOMAIN%\%USERNAME%” is run, which alters the title and color appropriately. So, this time around, create a .reg file with the following contents and import it into the registry. (Note that backslashes must be doubled inside a .reg string value.)

Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Command Processor]
"Autorun"="%windir%\\system32\\auditpol.exe /get /sd > nul 2> nul && (color 0C & title %USERDOMAIN%\\%USERNAME%) || (color 0A & title Non Administrator - %USERDOMAIN%\\%USERNAME%)"

This will result in a separate color for admin and non-admin instances of a command shell, which can help prevent commands from mistakenly being executed with too high privileges.

Signing mail with DKIM on Postfix and Amavisd

There are quite a few tutorials (e.g. here or here) available on how to DKIM-sign outgoing mail in Postfix. Both are good (and pretty similar), but in my opinion they miss a point vital to the reader’s understanding.

  smtpd_sender_restrictions =
    check_sender_access regexp:/etc/postfix/tag_as_originating.re
    permit_mynetworks
    permit_sasl_authenticated
    permit_tls_clientcerts
    check_sender_access regexp:/etc/postfix/tag_as_foreign.re

The above configuration in Postfix’s main.cf file makes sure that incoming mail isn’t signed by the Postfix server. No explanation of how this magic works is given, though. However, a great explanation is given in the following thread.

Briefly explained, it comes down to ordering. The first check_sender_access merely updates the content_filter with a redirect to port 10026, which tells Amavis that the mail is outgoing. Postfix then traverses the permit rules. If the sender matches any of them (mynetworks, sasl_authenticated or tls_clientcerts), sender verification stops, and hence the first check_sender_access is the only one that gets executed.

If the sender doesn’t match any of the rules, the mail must be incoming and should therefore not be signed. None of the permit rules generate a match, so the last check_sender_access is executed, which updates the content_filter to port 10024, telling Amavis that the mail is incoming.
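For completeness, the two regexp maps referenced above typically contain a single catch-all rule each, along the lines of the following (the amavisfeed transport name is an assumption taken from common tutorial setups; adjust it to match the Amavis service defined in your master.cf):

```
# /etc/postfix/tag_as_originating.re
/^/  FILTER amavisfeed:[127.0.0.1]:10026

# /etc/postfix/tag_as_foreign.re
/^/  FILTER amavisfeed:[127.0.0.1]:10024
```

The /^/ regexp matches every sender address, so each map unconditionally sets the content filter; it is the surrounding permit rules in main.cf that decide which of the two maps is ever reached.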

PCI DSS. Having a standard is good. Double standards are twice as good?

In late 2013, the retailer Target got hacked, which led to a heated debate on how to secure cardholder data and the already familiar weaknesses that magnetic stripes introduce. Using a card with an EMV chip and a PIN code has long been known to result in better security. There is some debate on whether this would have helped in this breach, though, as discussed in an article on the site Bank Systems and Technology. Personally, I find a more interesting standpoint on the whole matter to be the one made by Avivah Litan, a member of the Gartner blog network, and it is what I refer to in the title. The following quote is taken from her blog post “How PCI Failed Target and U.S. Consumers”.

"Of course, Visa,  MasterCard and the qualified security assessors who perform the PCI audits have all covered themselves legally.  That’s one area where they’ve been proactive. The assessor contracts that retailers and processors sign state that the assessor has no liability in the case of a breach. Further, when PCI first came out, Visa and MasterCard used to give merchants “safe harbor” from penalties in the case of breaches when the breached merchant was PCI compliant.  But they eliminated that safe harbor right after the first big breach.  When I asked Visa to explain, they told me “well the merchant must not have really been PCI compliant if they got breached.  And perhaps they didn’t give their assessor all the information they needed to properly audit their systems.”

So, to summarize:

1. If you’re dealing with credit cards, you have to adhere to PCI DSS.
2. PCI DSS involves extra costs, such as QSA audits and the additional security measures that need to be implemented in the Cardholder Data Environment.
3. Even if compliance is achieved, you receive no help if a breach occurs; instead you’re fined for “obviously being non-compliant”…

Sounds like a rock solid business plan…