thirty bees forum

Everything posted by gandalf

  1. My only idea is to add some fields to the customer registration form. Via JavaScript, on form submit, take the values of these fields and concatenate them onto an existing field (so that they get submitted with everything else); then, with actionObjectCustomerAddBefore and actionObjectCustomerUpdateBefore, split the concatenated values, save them properly in the DB and let everything else flow through. Just a proof of concept.
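
     A minimal sketch of that proof of concept, assuming (purely for illustration) that the extra values are appended to the existing `company` field with a `|` separator and stored in a hypothetical `customer_extra` table; none of these names come from thirty bees itself, and an *After hook is used as well because the customer id does not exist yet in the Before hook:

     ```
     class MyCustomerFields extends Module
     {
         /** Values split out of the overloaded field, waiting for the customer id. */
         protected $pendingExtra = [];

         // Runs before Customer::add(): restore the real field value, keep the extras aside.
         public function hookActionObjectCustomerAddBefore($params)
         {
             /** @var Customer $customer */
             $customer = $params['object'];
             $parts = explode('|', (string) $customer->company);
             if (count($parts) === 3) {
                 $customer->company = $parts[0];
                 $this->pendingExtra = [
                     'extra_field_1' => pSQL($parts[1]),
                     'extra_field_2' => pSQL($parts[2]),
                 ];
             }
         }

         // Runs after Customer::add(): the id now exists, so persist the extras.
         public function hookActionObjectCustomerAddAfter($params)
         {
             if ($this->pendingExtra) {
                 Db::getInstance()->insert('customer_extra', array_merge(
                     ['id_customer' => (int) $params['object']->id],
                     $this->pendingExtra
                 ));
             }
         }
     }
     ```

     The Update* hooks mentioned above would follow the same pattern for the edit-account form.
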
  2. @gonssal said in How to add custom fields without overrides: @gandalf the Customer class extends ObjectModel, which provides multiple hooks through the object lifecycle. In your case you would be interested in actionObjectCustomerAddAfter and actionObjectCustomerUpdateAfter. Probably also actionObjectCustomerUpdateAfter to clean leftover data. How would you use these hooks to add, let's say, 2 mandatory custom fields to the customer registration form? Could you give me an example?
  3. @traumflug said in How to add custom fields without overrides: Write code for just the change and call the parent method for everything else. Exactly. This is what we do when we are forced to use overrides. But it is possible only if the parent function can be called (before or after the customization). If you need to customize something "in the middle" of the parent function, you can't simply call the parent. And, as Murphy's Law tells you, most of the time you need to customize the middle of the parent method :) I've never seen "HOOK_CREATE_ACCOUNT_FORM" before; any example of how to use it? I don't think it would be possible to use this hook to save additional data to the customer table, as the extra fields won't be saved by the TB controller. Or am I wrong?
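
     For reference, the pattern traumflug describes ("just the change, then call the parent") looks roughly like this; a sketch only, using Customer::add() as an arbitrary example:

     ```
     // override/classes/Customer.php
     class Customer extends CustomerCore
     {
         public function add($autoDate = true, $nullValues = true)
         {
             // Unchanged core behaviour.
             $result = parent::add($autoDate, $nullValues);

             if ($result) {
                 // Only the custom post-processing lives here.
             }

             return $result;
         }
     }
     ```

     This only works when the custom code can run before or after the parent call, which is exactly the limitation described above: if the change has to go in the middle of the parent method, the pattern breaks down.
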
  4. @datakick said in How to add custom fields without overrides: @gandalf why the no override requirement? Because it's a perfect recipe for disaster. Overriding core features is wrong from any point of view. During a TB update you have to remember which modules are overriding which functions and update those modules accordingly, and this is not always possible. You could have created an override based on a function that is removed or changed in a newer version, and you can't bring that back without breaking everything else (I discovered this the hard way on a production e-commerce site). Probably some hooks added to the core would do the trick without any overrides. As PS doesn't have any hook for doing this, I thought that TB had added something. Maybe a pull request for this would be accepted?
  5. AFAIK some overrides (or changes to the core files) are needed to write a module for this
  6. Hi all. We need to add 2 custom fields during customer registration (just after date of birth), but without using overrides. Is this possible? Any workaround?
  7. There is also a paid module to fix this bug: https://www.presta-addons.net/en/content/7-incorrect-tax-rounding-and-application-in-prestashop-161
  8. Rounding each item creates another issue: PS/TB rounds each product even when calculating taxes, so 9.07 * 24 = 217.68; then, when calculating the price with tax, it does round(9.07 + 22%) = 11.07, and 11.07 * 24 = 265.68. Total VAT is 265.68 - 217.68 = 48.00 euros. This is wrong, the real VAT is ((9.07 * 24) + 22%) - (9.07 * 24) = 265.57 - 217.68 = 47.89, which is the same as 9.07 * 24 * 22% = 47.89. Setting rounding on the total exposes you to the original issue: the front office and back office still round to 2 decimals for display, so it is misleading to customers; the price shown would differ from the price used for calculating taxes and totals. Another question: why is there a _TB_PRICE_DATABASE_PRECISION_ constant hardcoded to 6? I tried setting it to 2, the same as the back office, and this fixed all rounding issues except the tax calculation (as written above).
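
     The gap can be reproduced with a few lines of plain PHP; this just replays the arithmetic above and is not thirty bees code:

     ```
     $unitNet = 9.07;   // unit price after per-item rounding
     $qty     = 24;
     $vat     = 0.22;

     // "Round on each item": the gross unit price is rounded first, then multiplied.
     $unitGross  = round($unitNet * (1 + $vat), 2);     // 11.07
     $vatPerItem = $unitGross * $qty - $unitNet * $qty; // 265.68 - 217.68 = 48.00

     // VAT computed on the line total instead:
     $vatOnTotal = round($unitNet * $qty * $vat, 2);    // 47.89

     printf("per item: %.2f  on total: %.2f\n", $vatPerItem, $vatOnTotal);
     ```
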
  9. Simple example, I've attached a screenshot. The current rounding scheme is "Round up away from zero", "Round on each item", 2 decimals in the back office. Cart on the frontend: € 9.07 * 24 is NOT € 217.74, it is € 217.68. The error comes from the rounding. This product has an original price (ex VAT) of € 9.55. The customer group has a 5% discount, thus € 9.55 - 5% = € 9.0725, not rounded. If you do € 9.0725 * 24 you get € 217.74, but I've set "round on each item", so PS/TB should round each product (€ 9.0725 to € 9.07) and then multiply it by 24. There are two issues in this: 1) the front office and back office still show the properly rounded product price (€ 9.07), but all following calculations are wrong, so on the same page (front office or back office) there is a product price that, multiplied by the ordered quantity, doesn't give the total shown. This is wrong and misleading for customers and administrators. 2) VAT is calculated on the wrong price (the non-rounded one), so there is also a fiscal issue. EDIT: changing the rounding scheme doesn't help, because on the frontend and in the back office each product item must be rounded to 2 decimals for display. Thus even by setting the rounding procedure to each line or to the total, you still get the same rounding issue. Internally PS/TB uses the non-rounded price but shows the price rounded to 2 decimals, so rounding on each item is mandatory (at least for displaying the cart).
  10. This happens in B2B stores, where VAT must be added (in a B2C store VAT is included in the product price, so there is no VAT calculation to do) AND a customer group discount like 5% is used. If the product price ex VAT is 9.55 and the customer has a 5% discount on their group, the sold price is 9.0725. From then on, every calculation is wrong: either the tax, or what is shown in the back office, or the order total, depending on what is used for rounding (which is not consistent).
  11. It is a bug, that is why I have opened this post in the bug section. Tax calculation doesn't respect the chosen rounding scheme. Additionally, the back office seems not to respect the rounding scheme, because if I set it to round only the total (and not each item), prices in the order detail are still rounded one by one (it can't be otherwise: PS/TB stores prices with 4 or 6 digits in the database but shows only 2 digits in the back office, so there is a rounding somewhere even if I set it to round only the total). In other words, rounding only the totals (this is not what I want, but it's easier to explain) creates inconsistencies between what the administrator sees (each price of each ordered item) and what PS/TB calculates as the total. If you manually put each order line (rounded one by one, regardless of the rounding scheme) into Excel and sum them, the order total doesn't match. To fix this I have set it to round each item. This fixes the inconsistency between the back office and the order total, but the tax calculation is still made using the original, non-rounded, 4/6-digit price, so now there is an inconsistency between the order total and the order VAT. On PS there are at least 10 or 15 open bugs regarding rounding, VAT calculation and so on. I thought that in TB, forked to be more focused on bug fixing than on adding new buggy features like PS, these were fixed, but I've seen the same bug even here.
  12. No, it is not correct for everyone, because it doesn't respect the rounding. If you choose a rounding scheme, you have to use the same scheme everywhere. Currently, the admin interface rounds in one way (regardless of the rounding scheme set in the back office), product prices use the configured rounding scheme, but the tax calculation does not. If I set a certain rounding scheme, the same scheme must be used for the VAT calculation, or the end result is that VAT is calculated on a wrong price. As written above: the product price is 12.18 (properly rounded), thus you have to calculate VAT on 12.18 and not on the "original", non-rounded price (12.1753). If you do the latter, at least in the whole of Europe, you are calculating taxes on a wrong amount, and there is a mismatch even in what customers see: net price 12.18, price with VAT (22%) 14.85, but it should be 12.18 + 22% = 14.86.
  13. In a B2B store we are using "round each item" with 2 decimals, and all product prices are set excluding taxes. Some customers have a group discount (e.g. 5%). Tax calculation is wrong; it probably doesn't round the product price before calculating the tax amount. For example, 12.1753 + 22% = 14.8538, but when using 2 decimals, 12.18 + 22% = 14.8596, which is 14.86. This is 1 cent more. If you have multiple items in the cart, for example a product with 48 or 56 bottles, you end up some euros short. This is the same in PS 1.6 to 1.7.2; there are some long-standing open bugs with no fix (they are focused on "higher priority" tasks than fixing the tax calculation O_o ). Probably PS/TB internally uses the number without rounding (so, with 4-decimal precision), because in ps_order_detail I can see the product stored with 4 decimals; for example, we have a product (the real "correct" price is 4.85) stored as "4.845000" in ps_order_detail. The customer bought 6 of these, so the real total price should be 4.85 * 6 = 29.10, but PS/TB calculates it as 29.07. In the back office I can see the correct price (29.10), but the total tax calculation for the whole cart is wrong, as it is calculated on 29.07.
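
      Replaying those numbers in plain PHP (illustration only, not thirty bees code) shows where the discrepancy comes from:

      ```
      $stored    = 4.845;               // unit price kept unrounded in ps_order_detail
      $displayed = round($stored, 2);   // 4.85, what the back office shows
      $qty       = 6;

      $totalUsedForTax = $stored * $qty;     // 29.07
      $totalDisplayed  = $displayed * $qty;  // 29.10

      printf("used for tax: %.2f  displayed: %.2f\n", $totalUsedForTax, $totalDisplayed);
      ```
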
  14. You don't have to change all filesystem calls (most of them are cache-related and must stay in place), only image-related calls and anything else related to user assets (product attachments and so on): anything uploaded by the shop administrator from the back office.
  15. AFAIK, Sylius. Yes, a module would be OK, but tons of overrides would be needed. For example, I don't see any hooks here: https://github.com/thirtybees/thirtybees/blob/1.0.x/classes/Image.php#L338 so an override is mandatory. I really hate overrides (in fact, overrides are disabled in all the installations I manage).
  16. As written previously, it is not a budget question. With your workaround you have tons of tedious procedures to follow, just as a workaround. With my solution it's all automated. We are in 2018; your workaround is something from the 90s.
  17. Currently nothing, but on every order at least one PDF is generated. You have to sync these... What you described is a workaround, not a solution. I've opened this thread to propose a solution. It is not mandatory to use a cloud provider, as the Flysystem library also supports the local disk, and the local disk would be used as the default, so you can still use your workflow. It's an addition allowing the use of a real cloud provider (and not a VPS provider with hourly billing like DO/Vultr), not a replacement. If a TB administrator doesn't need any of this, they won't be affected, as TB would still use the local filesystem exactly as it does right now; but adding the Flysystem library would allow any other administrator to use a cloud storage by changing a couple of lines in settings.inc.php. EDIT: something like the following:

      ```
      define('STORAGE', 'local'); // standard TB storage

      /*
       * Use Google Cloud Storage
       */
      define('STORAGE', 'gcs');
      define('STORAGE_USER', '');
      define('STORAGE_PASS', '');
      define('STORAGE_ENDPOINT', '');
      ```

      nothing more.
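
      A rough idea of how defines like these could be turned into a Flysystem instance at bootstrap; a sketch only, using the Flysystem 1.x API, with the helper name invented and the cloud branch left as a comment because the concrete GCS adapter package would be a project decision:

      ```
      use League\Flysystem\Filesystem;
      use League\Flysystem\Adapter\Local;

      // Hypothetical helper, not part of thirty bees.
      function tbGetUserStorage()
      {
          $backend = defined('STORAGE') ? STORAGE : 'local';

          switch ($backend) {
              case 'gcs':
                  // Wire up a Google Cloud Storage adapter here, built from
                  // STORAGE_USER / STORAGE_PASS / STORAGE_ENDPOINT.
                  throw new RuntimeException('Cloud adapter not wired up in this sketch');
              case 'local':
              default:
                  // Same behaviour as today: plain local disk under the shop root.
                  return new Filesystem(new Local(_PS_ROOT_DIR_));
          }
      }
      ```
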
  18. Wow! What a massive amount of work, only for 2 or 3 hours on a single day of the year. It's a simple command on Heroku or a simple click in the AWS/Google/Azure console (or absolutely nothing if you enable the auto-scale feature). And while you are syncing between these 2 instances, some users will add content and so on, so you have to sync again and again and again...
  19. Do you make use of overrides?
  20. @lesley said in Abstract filesystem layer: A properly configured site using thirty bees and the page cache system can handle several hundred users on a 2gb instance. To extrapolate, I tested a default thirty bees installation on a 2gb 1vcpu vultr server. It could hold 550 users before it started spiking wildly. But you have to consider a default installation has very few images and highly optimized images, and not very much css and js as well.

      You can't compare Vultr/DigitalOcean to any other cloud provider. They use local SSD disks. Any other (real) cloud provider, like Azure, AWS or Google, uses network SSDs. To get the same IOPS you have on Vultr you have to spin up a multi-gigabyte disk even if you only use 500MB. And there are tons of drawbacks, one above all: planned maintenance. DO will shut down the instance; on AWS/Google/Azure it is transparent, they move your instance without downtime because disks are replicated. Try to move a DO droplet with a 100GB disk... you'll get many minutes of downtime (we are DO customers).

      @lesley also said: You do have to factor in a direct way for you to access the master server in the whole array. I just do not personally see this as an application problem since there are simple solutions that most all applications use. It would be adding another layer of complexity and slowness to the software for not really any gain. Sure, I imagine a few people would use it, but many more would use a properly designed network, since it would be faster and more stable.

      OK, let's make an example. Try to scale up (with no downtime) a TB shop (160GB disk, because on DigitalOcean disks are fixed based on the instance size) during Black Friday, when for the rest of the year you have 20-30 concurrent users and during a Black Friday limited offer you'll have, at peak, 300-500 concurrent users for about 2 or 3 hours. Are you on Vultr/DigitalOcean? Shutdown, grow the instance (but not the disk), power on. You had a downtime, a HUGE downtime, as DO migrates your droplet (and your disk) to another server every time. And you have another (HUGE) downtime when scaling down. Plus the single-VM hardware limit. Are you starting a new project on DO? You have to choose the VM based on your standard requirement. VMs have fixed disks, so if you need 8GB/2vCPU you also get a huge 160GB SSD disk, even if you only need 1GB. From then on, any operation like growing/shrinking the instance results in a migration of the whole image (with 160GB to migrate over the network). I know this because up to 3 months ago we were on DO, and every time we had to scale we had 20-30 minutes of downtime. With AWS, Google, Azure, ...: just spin up a new (preemptible/spot) instance for 2 hours, automatically, without downtime. With Heroku: "heroku ps:scale web=4". It's easy to get. Just replace occurrences of "copy($src, $dst)" with "$filesystem->copy($from, $to);" (obviously, this is just an example).
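
      A sketch of what that replacement could look like with the Flysystem 1.x API (the wiring and the image paths are illustrative, not actual thirty bees code):

      ```
      use League\Flysystem\Filesystem;
      use League\Flysystem\Adapter\Local;

      // Illustrative wiring; a cloud adapter could be injected here instead of Local.
      $filesystem = new Filesystem(new Local(_PS_ROOT_DIR_));

      // Before: native PHP call, tied to the local disk.
      // copy($src, $dst);

      // After: the same operation through the abstraction layer,
      // with paths relative to the adapter root.
      $filesystem->copy('img/p/1/1.jpg', 'img/p/1/1-copy.jpg');
      ```
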
  21. @datakick said in Abstract filesystem layer: @gandalf I don't think we can consider a module to be badly developed just because it uses the PHP filesystem API. I personally have created a few modules that would be impacted by this (their content would not be synchronized by default). Now, there exist thousands of modules out there compatible with PS 1.6/thirty bees. We can't really expect their developers to update them all, or force them to use some FS abstraction layer. It's just too late for that. That said, I still think the application layer is not the right place for this, especially since there are very few merchants that would actually use this kind of feature.

      If a TB administrator needs to put a TB shop on the cloud, they also know what this means and should look for modules that use a filesystem abstraction API. You are not forced to use it... For example, we will skip any module that uses overrides or certain features, because they won't be compatible with our system. It is a decision that should be taken on every shop, by the shop administrator. Some features are not compatible with other features. I think this is normal.
  22. @lesley said in Abstract filesystem layer: The easiest and best way I have found to handle the situation is to throw up a media server. Because if you are generating enough traffic to need to spin up another instance, you are generally getting close to saturating your port as well.

      Absolutely wrong. If you are on any cloud provider, you'll hit CPU/RAM/VPS limits well before any network limit (which could be 10GbE or 100GbE, you can't know).

      @lesley also said: There are really several different levels all of this can happen on. One thing you have to keep in mind as well is if you are load balancing thirty bees instances, they DO require you to use sticky sessions. If you use sticky sessions, users will always access the same machine, so you will not run into issues of files being cached on one machine and the file not being created on the other machine.

      Cache is not an issue.

      @lesley also said: If you are using a different form of load balancing, what you want to do is this. Create your site, create a logical storage block to handle the caches, img files, theme cache, and modules directory. Then create a mount point in the file system for your installation, where those directories are mounted in those locations. Then you want to package that as a machine instance that can be deployed over the network. This way, when your instances are deployed, they are preconfigured with the network storage mount points.

      If you are using a cloud environment (Heroku? Google Container Engine? Kubernetes? Azure? ...), it is not easy to have a shared filesystem to use as a mount point (or to have it at decent prices). And it won't be possible if you are scaling the infrastructure across multiple regions.

      @lesley also said: We could likely sit around and come up with a way to handle this on the application level, but there is not one way we could handle it where we would get close to the same performance of handling it on the OS level. Since it is about performance, we just need to stick with the most performant option.

      Performance is not an issue; we use the same system on many other projects and performance is not a problem. Keep in mind that only "writes" must be directed to the cloud storage, and writes are made only from the back office. When a file is uploaded to a cloud storage, getting it back from the cloud storage or from the instance disk doesn't change the performance, as the user still has to transfer it over the network. In addition, if you are using Amazon S3 or Google Cloud Storage, this storage is automatically cached at the cloud edge. It could be faster than hosting it on TB itself. Yes, uploading a file would be a little bit slower, but in a normal situation uploads/writes are 1/10th or 1/100th of reads.
  23. @datakick badly developed third-party modules are not a TB issue. A third-party module could also use some overrides and break everything, but this wouldn't be a TB fault. By writing some good docs or specs you can tell any third-party module to use the FS abstraction layer and not call the filesystem directly. Or TB could expose some methods (for example, a simple "image_upload") that should be used by any third-party module, and so on. There are many ways...
  24. How do you scale TB across multiple servers if user assets are stored on a single instance? If you have to spin up one or more instances during peak hours, you have to keep all user uploads in sync. With my proposal you don't have to sync anything, as everything is already usable from all instances. But based on your responses, I don't think it is clear to you what 12factor is or what horizontal scaling means... A media server is totally useless in this case; it is a different thing for a totally different purpose.
  25. To properly scale TB out horizontally, we need to use external storage, like Amazon S3 or Google Cloud Storage. There is a very good library that abstracts the filesystem layer: http://flysystem.thephpleague.com/docs/ What do you think about adding this support to TB? The same library can also be used to store files locally, as in a standard TB installation. If anyone is interested in this and there is a chance of getting this feature into TB, we can develop it on our own and create a pull request on GitHub.