dynambee

Webservice (API) 403 Error

Question

Posted (edited)

After a particularly terrible experience with Cloudways support I'm migrating away from Cloudways to a self-managed VULTR server running CentOS 7 + CentOS Web Panel (CWP). (It took Cloudways support 18 hours to resolve an SSH key issue that they somehow created, and I couldn't access the server with public/private keys during that time. I have no idea how long it would take them to fix an actually serious problem, and I do not wish to find out...)

After some initial struggles things are now working quite well and I am learning how to get things done within CWP. I can get sites installed and running quite easily now and am overall very happy with the way CWP works.

I do however have one problem: the webservice (aka API) is not operating properly. I can send GET requests and retrieve data from the webservice. These show up in the Apache logs with status code 200 and the data is displayed correctly.

However, when I try to use a PUT request to make changes to server data it fails with a 403 Forbidden error. This same code works just fine on Cloudways and has worked with other hosts, so I think there is some sort of permissions error going on. For the life of me I cannot figure this out.

I have spent a lot of time over the past couple of days banging my head against the wall, which has been great for learning, but I'm out of ideas. There are a lot of suggestions on Stack Exchange about how to rectify common 403 errors with Apache, but none have worked for me. I have been careful not to make any permanent changes, always undoing any changes I make along the way.

A few of the things I have tried:

* Making sure perms for all folders are set to 755 and for files to 644. These are the perms recommended by CWP.

* Making sure perms for all folders are set to 775 and for files to 664. These more permissive perms are what Cloudways uses.

* Adding "Options FollowSymLinks" to the .htaccess file (this changed the error code from 403 to 500 and made the entire website inaccessible) 

* Adding "Options +FollowSymLinks" to the .htaccess file (this changed the error code from 403 to 500 and made the entire website inaccessible) 

* Making sure mod_security is OFF for the domain I am working with (can be turned on and off on a per-domain basis with CWP)

* A LOT of time spent trying to interpret the Apache config files, which are really above my ability to understand. Despite doing a LOT of reading about these files, in the end I didn't feel comfortable monkeying with them lest I make things much worse.
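As a side note, the permission resets from the first two bullets can be scripted with find rather than set by hand. A sketch (the docroot path here is a placeholder; adjust to your site):

```shell
# Reset permissions under the site docroot: 755 for directories, 644 for files
# (the values CWP recommends). The path below is a placeholder.
DOCROOT="/home/example/public_html"
find "$DOCROOT" -type d -exec chmod 755 {} +
find "$DOCROOT" -type f -exec chmod 644 {} +
```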

Some info on the server:

CentOS 7 with all updates
CWP 0.9.8.836
Apache 2.4.39
Nginx 1.16.0
PHP-FPM v7.1.30
MariaDB 10.1.40
VULTR's standard firewall is disabled, the firewall included with CWP is enabled (CSF + LFD)

Varnish is also installed but is disabled for this domain, so this domain is running a similar set of services to what Cloudways uses: Nginx -> Apache -> PHP-FPM.

This overall server setup was created by CWP. In other words, I didn't personally install and configure each service, the server is an automated install using CWP that configures things in a standard way.

So, does anyone have any ideas on what I need to change to make this work?

Edited by dynambee
Info about why I wish to change hosts


Recommended Posts


A few things I forgot to include:

* I'm running TB "1.0.8-1.0.x bleeding edge" with all updates as of yesterday

* I have confirmed that the TB webservice (API) key that I am using has full permissions and is not blocked from PUT or POST requests

* I can manually make changes through the back office web interface and they are saved properly. (Tested with the same changes I attempted by API.)

A couple of lines from the access_log file:

"GET /api/countries?display=full&date=1&ws_key=[key removed] HTTP/1.0" 200 163545
"PUT /api/countries?ws_key=[key removed] HTTP/1.0" 403 222
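If it helps anyone doing the same digging, an awk one-liner like this pulls just the method, path, and status of PUT requests out of a combined-format access log (the log path is a placeholder; use your vhost's access_log):

```shell
# Print method, request path, and status code for PUT requests.
# Fields in the combined log format: $6 = "METHOD, $7 = path, $9 = status.
awk '$6 ~ /PUT/ {print $6, $7, $9}' /var/log/httpd/access_log
```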
 

 


Thanks for the quick reply @lesley.

CWP takes care of the overall configuration settings so I haven't had to set things up (heh, screw things up...) manually. However it is using PHP-FPM, which as far as I understand it (and that probably isn't very far...) is a process manager for FastCGI. So yes, I believe the server is using fcgi.

I have tested the webservice with "Enable CGI mode for PHP" both on and off.

I also just used a Firefox plugin to test the webservice from Firefox, to see if it was something about how the request was being formed or if there was a different problem with my automation, but it returned the same 403 Forbidden error.

I sent the following XML via Firefox:

<?xml version="1.0" encoding="utf-8"?>
<thirtybees>
<country>
  <id xmlns="">1</id>
  <id_zone xmlns="">0</id_zone>
  <id_currency xmlns="">0</id_currency>
  <call_prefix xmlns="">49</call_prefix>
  <iso_code xmlns="">DE</iso_code>
  <active xmlns="">0</active>
  <contains_states xmlns="">0</contains_states>
  <need_identification_number xmlns="">0</need_identification_number>
  <need_zip_code xmlns="">1</need_zip_code>
  <zip_code_format xmlns="">NNNNN</zip_code_format>
  <display_tax_label xmlns="">0</display_tax_label>
  <name xmlns="">
    <language id="1">Germany</language>
  </name>
</country>
</thirtybees>
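(For anyone following along, the same PUT can be sent from the command line instead of a browser plugin. A sketch with curl; the domain and key are placeholders, and country.xml is the XML above saved to a file:)

```shell
# Send the XML payload as a PUT to the webservice and print only the status code.
# example.com and YOUR_KEY are placeholders; country.xml holds the XML body.
curl -s -o /dev/null -w "%{http_code}\n" \
  -X PUT -H "Content-Type: text/xml" \
  --data-binary @country.xml \
  "https://example.com/api/countries?ws_key=YOUR_KEY"
```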

 

The exact response via Firefox was as follows:

Server: nginx/1.16.0
Date: Tue, 25 Jun 2019 06:16:27 GMT
Content-Type: text/html; charset=iso-8859-1
Content-Length: 222
Connection: keep-alive
Keep-Alive: timeout=60

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>403 Forbidden</title>
</head><body>
<h1>Forbidden</h1>
<p>You don't have permission to access /api/countries
on this server.<br />
</p>
</body></html>

 

I have also done tests with all the server firewalls temporarily disabled to make sure nothing was being unexpectedly blocked.

I think (but could easily be wrong!) that I have narrowed it down to a webserver configuration issue: GET works fine but PUT fails.


I'd be more than happy to give you access to the server if you would be willing to have a look around.


After commenting out those lines the GET request failed with a "<![CDATA[Authentication key is empty]]>" error. The PUT request failed with the same 403 Forbidden error as before. Uncommenting those lines returned things to how they were previously (GET = OK, PUT = 403).


I am inclined to think that it might be a mod_sec issue even though you mentioned it is disabled. Can you try this? I know it will sound stupid, but go to the back office, go to a product page, click on one of the description boxes, and try to upload an image there. See if the image uploads.

4 minutes ago, lesley said:

can you go to the back office, go to a product page, click on one of the description boxes and try to upload an image there. See if the image uploads. 

No problem, willing to try anything at this point!

The image uploaded fine and appears correctly on the product in both the back and front offices.


OK, yeah, I am thinking it is a mod_sec or firewall issue. Have you checked through the mod_sec log to see if it is triggering? I know you mentioned it is turned off, but if it's not, it should be logging. (Like if you think it is turned off but it's not really turned off.)


That's not where I was talking about. I meant upload it to the short or long description. The reason is that if an image does not upload there and pulls a 403 error, it means the temp upload path is messed up. That is me troubleshooting through the different causes of a 403 error.

1 minute ago, lesley said:

OK, yeah, I am thinking it is a mod_sec or firewall issue. Have you checked through the mod_sec log to see if it is triggering? I know you mentioned it is turned off, but if it's not, it should be logging. (Like if you think it is turned off but it's not really turned off.)

mod_sec is definitely turned off. I actually had this problem before mod_security was installed at all. I was starting to get desperate, so I installed mod_security in the hopes that either it would make the problem go away (lol) or would give me some sort of insight into what the problem was. Neither happened, so I disabled it for the domain with tb installed. I have also checked the logs and there are no ModSecurity errors in them at all.

Having tested (with the same results) with the firewall off, with mod_security off, and having reset all the file permissions, I figured this might be some sort of Apache configuration issue.

I'm good at troubleshooting (trial and error, narrowing things down carefully) but not skilled with webserver configs, unfortunately.

3 minutes ago, lesley said:

That's not where I was talking about. I meant upload it to the short or long description. The reason is that if an image does not upload there and pulls a 403 error, it means the temp upload path is messed up. That is me troubleshooting through the different causes of a 403 error.

I uploaded the image in the back office product catalog. However I just tried adding an image to the description and uploading it that way, and that works too.


(My apologies for the image misunderstanding. Since I do everything through the webservice I'm actually not all that familiar with how the products section of the back office works.)

Posted (edited)

Just for certainty I rechecked the error logs with this command:

grep [my office ip removed] *.error.log|grep ModSecurity

There were no results.

I have a fixed IP here so there is no chance that errors would appear under a previous dynamic IP or anything like that. However to be extra safe I did this as well:

grep ModSecurity *.error.log

There were still no errors.

However when I run this:

grep proxy_fcgi *.error.log

I get a lot of general errors that don't seem to be related to the API. I haven't had to check this sort of thing before (I don't think I even had access to these logs on Cloudways) so I don't know if this is normal or not. This error seems to repeat many times, I guess from the recent back office product edit for that photo upload:

[domain removed].error.log:[Tue Jun 25 07:13:29.934861 2019] [proxy_fcgi:error] [pid 1945:tid 139881086928640] [client [ip removed]:60690] 
AH01071: Got error 'PHP message: PHP Warning:  Declaration of Cart::getDeliveryOptionList() should be compatible with CartCore::getDeliveryOptionList(?Country $defaultCountry = NULL, $flush = false) in 
/home/[removed]/public_html/override/classes/Cart.php on line 0\n', 
referer: https://[domain removed]/admin/index.php?controller=AdminProducts&id_product=1&updateproduct&conf=4&key_tab=Informations&token=[token removed]

 

Edit: Credit where it's due, above grep commands and log locations learned from here.

 

Edited by dynambee


After spending a bunch more time and doing a lot more reading I have managed to figure out something that works, but I don't know if it is a good solution or not from a security standpoint.

I modified the Apache vhost configuration file as follows:

<Directory "/home/[sitedir]/public_html">
	# Commented out this line:
	#Options -Indexes -FollowSymLinks +SymLinksIfOwnerMatch
	
	# Added this line:
	Options FollowSymLinks Indexes
	
	# No change to this line:
	AllowOverride All Options=ExecCGI,Includes,IncludesNOEXEC,Indexes,MultiViews,SymLinksIfOwnerMatch
	
	# Added these 3 lines:
	Order allow,deny
	Allow from all
	Require all granted
</Directory>
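For comparison, a narrower variant might be worth trying (an untested sketch, not a confirmed fix): Indexes enables directory listings, which the webservice should not need, so it may be possible to leave listings off and only relax the symlink and access rules:

```
<Directory "/home/[sitedir]/public_html">
	# Symlinks stay enabled (mod_rewrite needs them), directory listings stay off
	Options -Indexes +FollowSymLinks
	AllowOverride All
	Require all granted
</Directory>
```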

I have seen many people post about getting 403 Forbidden errors when trying to use PUT with the PrestaShop 1.6 webservice. Some managed to fix it through the .htaccess file, but others haven't been able to solve the problem and just end up changing hosts. It seems likely that in those cases the host has locked everything down extra tight, causing this problem.

As I mentioned above, though, I am not sure how much the changes I made impact the security of the site. I am not concerned about other users on the server, as the server is mine and there won't be any other users. I am however concerned that the changes might weaken external security. Does anyone have any thoughts or feedback about these changes from a security standpoint?

 

 


It's been quite an adventure moving away from managed hosting. Slowly getting a handle on it, haven't had to do anything like this for a while. Good for my brain. 🙂

9 hours ago, Factor said:

Yeah picking CWP over cPanel..  Braver man than I..

There is certainly no denying CWP's steep learning curve compared to cPanel or Plesk. However I think CWP is easier to work with than something like ISPConfig which I couldn't even get to install properly. I tried a bunch of other well-known control panels as well but for one reason or another I found CWP to be the best fit for what I want to do. It's powerful, flexible, and not insanely over-complicated. It probably helps that it doesn't try to work with a dozen different distros and just focuses on CentOS.

 

 

Posted (edited)

OK, I don't use the Webservice, but from the PS documentation:

http://doc.prestashop.com/display/PS16/Web+service+tutorial

I think you did all of this, but just checking:

To test if you have properly configured your access to the web service, try to access your online shop with the following URL: http://mypasskey@example.com/api/, where mypasskey is replaced with the key you created, and example.com with your shop's URL. Mozilla Firefox is the preferred browser to test your access, but any browser able to display XML should do just fine.

Do you have php-curl installed?

Type at the command prompt:

php -v

and

php -m

Is curl in this list? Since CWP compiles PHP from source, you might need to make sure it's in there.
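The check can also be piped through grep directly (note the CLI binary may be a different build than the one PHP-FPM uses on CWP, so this is only indicative):

```shell
# Look for the curl extension in the loaded-modules list.
# Caveat: the CLI php may not be the same build that PHP-FPM runs.
php -m | grep -ix 'curl' && echo "curl extension loaded"
```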

Also, it could just be a general comment, but I see this:

http://doc.prestashop.com/display/PS16/Web+service+tutorial

Prerequisites

  • PrestaShop 1.6 installed on a server with mod_rewrite enabled (Apache only).
Edited by Factor


I didn't have problems accessing the webservice; it handled GET requests perfectly, so the webservice itself was fine.

The issue was that Apache was configured in such a way as to block PUT requests, or at least the type of PUT request needed for the webservice. This isn't necessarily a bad thing, as the vast majority of people don't use APIs and don't need this sort of access. In any case, this meant I could retrieve data from the webservice but not make any changes: I couldn't enable/disable carriers/countries/zones/etc. or send any delete requests.

After making the changes I listed to the vhost file for my domain I can fully work with the webservice, exactly as I could on Cloudways. I'm still waiting for confirmation from the CWP support guys, but for my particular situation I don't think my changes have affected server security in any meaningful way. (It could make a cross-site attack easier in some situations, but since all the sites on the server will be mine I am not concerned about this.)


It seems that the solution I posted above (changes to the vhost settings) is a good overall solution for my situation and does not weaken external site security.

There is a potential issue where Options FollowSymLinks can make it easier for an attacker who has access to one site to attack other sites on the same server. For me this isn't a problem because a) all the sites will be mine, and b) all the sites will be running TB. If an attacker finds a hole in one TB install the same hole is going to exist in all of them. The attacker will *already* have access to all the sites without having to try to exploit a cross-site attack.

So for now I'm happy. All the API automation code I have in place is working and I am adding & testing more each day. Looking forward to getting my first 4 or 5 sites up and running soon. 🙂

