Everything posted by datakick
-
No, it will not
-
I'm not sure, I didn't test. The upcoming module is a backport of mollie module for ps17, so hopefully it will be supported.
-
The other disallow directive is for stores without friendly URLs enabled -- there is no change in the blackhole name there. The robots.txt should look like this:

User-agent: *
Disallow: */blackholenew/
Disallow: /modules/blackholebots/blackhole/*

Note the * before /blackholenew/ -- it's there to block language variants as well.
-
I've released a new version of the module that allows you to change the trap URL. If bing is already indexing your trap URL for any reason, you can change it from https://domain.com/blackhole to something new like https://domain.com/my-honey-trap (and change robots.txt accordingly). This way, when bing sends traffic to your website at the /blackhole address, it will not be blocked. To prevent 404s, I suggest you also add a redirect from /blackhole to the homepage in your .htaccess file. Hopefully, bing will not add the new trap URL to the index again.

I've added some extra precautions to prevent this as well -- if a known good bot (google, bing, etc.) somehow makes it to your trap URL (even when robots.txt blocks it), then the content of the trap page will be mostly empty, and the page headers will contain <meta name="robots" content="noindex"> to instruct the bot not to index the page.
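The .htaccess redirect mentioned above could look something like this (a sketch only -- it assumes Apache with mod_rewrite enabled, thirty bees installed in the domain root, and the default /blackhole trap path; adjust the pattern if your old trap URL differs):

```apache
# Hypothetical example: permanently redirect the old trap URL to the homepage
# so visitors sent there by stale search results don't hit a 404.
RewriteEngine On
RewriteRule ^blackhole/?$ / [R=301,L]
```

Using a 301 (permanent) redirect also signals to search engines that the old address is gone for good.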
-
We have custom sendylists module to synchronize thirty bees with lists in sendy. However, we had to modify the sendy installation itself, because it doesn't support bulk api call by default. So this module is not useful without that modification. That's one of the reasons why we didn't release it publicly.
-
A few things when importing products via CSV
datakick replied to 312erik123's question in Technical help
Thanks. 1) We know about the first set of warnings -- this is an issue with legacy code that stores transient information in objects during import. Unfortunately, it will take a lot of refactoring to fix. Currently it's not high on the priority list, as it's still just a deprecation warning. It will be an issue on php9, though. 2) Tools::getDateFromDateFormat - thanks, fixed in bleeding edge 3) consistency check module -- I haven't updated the module to check for all supported extensions -- do not use this module on bleeding edge yet -
Should be fixed in bleeding edge, please try
-
It's a bug in core, I've managed to reproduce it on my dev env. I'll prepare a fix and let you know.
-
I very much agree with @the.rampage.rado. From a security point of view, it's much better to install thirty bees into a standalone subdomain. I have seen thirty bees sites that were infected because of a wordpress installation. Using subdomains has some additional benefits as well. For example, fewer cookies will be sent -- thirty bees will not receive wordpress cookies. Also, a misconfiguration in the .htaccess within the parent (wordpress) directory will not affect the thirty bees web. And I'm sure there are more. Install it in a standalone, isolated subdomain.
-
As Smile wrote -- adding a new field to the db and implementing basic CRUD operations on top of it is quite simple. Anyone who really needs it can implement this functionality. However, adding this properly to the core is more complex. We have to consider other aspects as well, for example:

- support this field in CSV import
- expose this field in Webservice, both read and write access
- impact on themes -- should this be exposed to the theme? Do we need to update community-managed themes?
- since this is a classification field, maybe we should have a new table with all possible values instead of free text? If that's the case, we need to populate this table during install AND during store update, and make sure the table values are up to date (which means future work is needed)
- and who knows what else

It would take a few hours, maybe even days, to do that. It was not worth the effort at the time -- no real demand for this field, and anyone who really needs it can create a limited implementation themselves.
-
Yes, you should install the collectlogs module and let it collect errors for a few days. You have to fix all deprecation warnings -- the core code should be ok, but you will get some errors from modules and the theme. After all warnings are fixed, you can safely update to a higher version of PHP. Always update one PHP level at a time -- from 7.4 to 8.0. When you later want to update to 8.1, you will have to repeat this process again.
-
Your hosting provider suspended your account for some reason. Maybe you didn't pay, or maybe your site was sending spam emails, etc. Contact your hosting provider.
-
Pages blocked by robots.txt after updating to 1.5
datakick replied to Derbai's question in Updating thirty bees
When I tested it yesterday, it indeed displayed the robots.txt error. I assume that google used, at that time, some old (cached) version of robots.txt that blocked the access. -
Pages blocked by robots.txt after updating to 1.5
datakick replied to Derbai's question in Updating thirty bees
My guess is that your server is blocking requests from google servers (maybe you have some firewall or something that blocks them). This is not an application issue. You can verify this by checking your server access log. -
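A quick way to check the access log, as suggested above, is to grep it for Googlebot requests. This is only a sketch -- the log file path and format vary by hosting environment, so the sample log lines below are hypothetical; on a real server you would point at your actual access log (e.g. /var/log/apache2/access.log):

```shell
# Build a tiny sample log so the example is self-contained; on a real server,
# set LOG to your actual access log path instead.
LOG=sample_access.log
printf '66.249.66.1 - - [01/Jan/2024] "GET /robots.txt" 200 "Googlebot/2.1"\n' > "$LOG"
printf '203.0.113.5 - - [01/Jan/2024] "GET /" 200 "Mozilla/5.0"\n' >> "$LOG"

# Count log lines whose user-agent string claims to be Googlebot.
# Zero hits would suggest google's requests never reach the application.
grep -c "Googlebot" "$LOG"
```

If requests show up in the log with error status codes (403, 5xx), the block is happening in your server or firewall layer rather than in thirty bees itself.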
@x97wehner Without error logs, it's not possible to say what is causing this. The module probably catches the exception and returns this generic error message. You will have to modify the place where this happens and add some additional logging -- display the exception in the output, or re-throw it and let collectlogs catch and log it. Or something similar.
-
The klarna payment method is now supported in stripe module v1.9.4.
-
Updating from 1.5.1 to Bleeding Edge Breaks Product Images
datakick replied to Rhapsody's question in Updating thirty bees
Maybe you have nginx in front of your apache? Or some other caching server? -
Yes, but since that note was written, the proper fix was released -- see the patch. That note is no longer relevant.