thirty bees forum

Abstract filesystem layer


gandalf


To properly scale thirty bees out horizontally, we need to use external storage such as Amazon S3 or Google Cloud Storage. There is a very good library that abstracts the filesystem layer automatically: http://flysystem.thephpleague.com/docs/

What do you think about adding this support to TB? The same library can also be used to store files locally, as in a standard TB installation.

If anyone is interested in this and there is a chance of getting this feature into TB, we can develop it on our own and create a pull request on GitHub.
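For a rough idea of what this looks like in code, here is a minimal sketch using Flysystem's 1.x API (the directory and file paths are illustrative; only the adapter line would change to target S3 or GCS):

```php
<?php
require 'vendor/autoload.php';

use League\Flysystem\Filesystem;
use League\Flysystem\Adapter\Local;

// The Local adapter behaves like a standard TB installation writing to disk;
// swapping it for an S3 or GCS adapter leaves every call below unchanged.
$filesystem = new Filesystem(new Local(__DIR__.'/img'));

$imageContents = file_get_contents('/tmp/upload.jpg'); // illustrative source file
$filesystem->put('p/1/1.jpg', $imageContents);         // create or overwrite
$exists = $filesystem->has('p/1/1.jpg');               // existence check
$contents = $filesystem->read('p/1/1.jpg');            // read back
```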

Link to comment
Share on other sites

thirty bees supports external media servers already, see back office -> Advanced Preferences -> Performance, panel Media servers. The database can be on a remote server as well. Server-side caching too, using a Redis server. The only sets of files I'm aware of that cannot be put on an external server are upload/ (files for customizing products) and download/ (downloadable products). PHP code itself can't be put on an external server, with or without an additional filesystem layer.

With this in mind, I see no substantial opportunity to abstract anything, other than fulfilling a buzzword. Maybe I'm missing something.

Link to comment
Share on other sites

Sure, the PHP code can be placed on different servers, if done behind a geo load balancer. The database structure supports multiple slave databases as well. Here is a good article about how you can scale horizontally with thirty bees: https://thirtybees.com/blog/thirty-bees-scalability/

Link to comment
Share on other sites

How do you scale TB on multiple servers, if user assets are stored on a single instance?

If you have to spin up one or more instances during peak hours, you have to keep all user uploads in sync. With my proposal you don't have to sync anything, as everything is already usable from all instances.

But based on your responses, I think you don't have a clear picture of what twelve-factor is or what horizontal scaling means...

Media servers are totally useless in this case. They are a different thing for a totally different purpose.

Link to comment
Share on other sites

@gandalf The application is not the right layer to address this. In fact, it's not even possible with an open system like thirtybees. We could fix all the places in the core that touch the filesystem and change them to use this filesystem abstraction layer, but what about third-party modules? Any module can use PHP filesystem functions directly, so the abstraction would be bypassed anyway. You would end up much worse off -- some assets would be correctly synchronized, while others would not. And there's nothing thirtybees can do here -- we can't just force module developers to use our filesystem abstraction.

If you want to sync files, you need to do that at a different/lower level: either at the PHP server (a PHP module that tweaks the filesystem API), or at the OS level (map the thirtybees installation over distributed virtual volumes).

Link to comment
Share on other sites

The easiest and best way I have found to handle the situation is to throw up a media server, because if you are generating enough traffic to need to spin up another instance, you are generally getting close to saturating your port as well.

There are really several different levels all of this can happen on. One thing you have to keep in mind as well is that if you are load balancing thirty bees instances, they DO require you to use sticky sessions. With sticky sessions, users will always access the same machine, so you will not run into issues of files being cached on one machine and the file not being created on the other machine.

If you are using a different form of load balancing, what you want to do is this: create your site, and create a logical storage block to handle the caches, img files, theme cache, and modules directory. Then create a mount point in the filesystem of your installation, where those directories are mounted in those locations. Then you want to package that as a machine instance that can be deployed over the network. This way, when your instances are deployed, they are preconfigured with the network storage mount points.
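As a concrete illustration (a config sketch only: the NFS server address, export paths, and web root are hypothetical), the shared directories could be mounted on each instance like this:

```
# /etc/fstab on every app instance -- hypothetical NFS exports for the shared dirs
10.0.0.10:/export/shop/img     /var/www/shop/img     nfs4  defaults,_netdev  0 0
10.0.0.10:/export/shop/cache   /var/www/shop/cache   nfs4  defaults,_netdev  0 0
10.0.0.10:/export/shop/modules /var/www/shop/modules nfs4  defaults,_netdev  0 0
```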

We could likely sit around and come up with a way to handle this at the application level, but there is no way we could handle it that would come close to the performance of handling it at the OS level. Since it is about performance, we just need to stick with the most performant option.

Link to comment
Share on other sites

@datakick badly developed third-party modules are not a TB issue. A third-party module could also use overrides and break everything, but that wouldn't be TB's fault. By writing good docs or specs you can tell any third-party module to use the FS abstraction layer instead of calling the filesystem directly. Or TB could expose some methods (for example, a simple "image_upload") that any third-party module should use, and so on.

There are many ways...
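As a hedged sketch of that "image_upload" idea (the class, method name, and wiring are hypothetical; nothing like this exists in TB today):

```php
<?php

use League\Flysystem\Filesystem;

// Hypothetical core helper that third-party modules would be documented to
// call instead of copy()/move_uploaded_file(), so the configured storage
// backend handles the write wherever it lives.
class StorageHelper
{
    /** @var Filesystem */
    protected static $filesystem;

    public static function setFilesystem(Filesystem $filesystem)
    {
        static::$filesystem = $filesystem;
    }

    public static function imageUpload($tmpPath, $destPath)
    {
        $stream = fopen($tmpPath, 'rb');
        $result = static::$filesystem->putStream($destPath, $stream); // Flysystem 1.x
        if (is_resource($stream)) {
            fclose($stream);
        }
        return $result;
    }
}
```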

Link to comment
Share on other sites

@lesley said in Abstract filesystem layer:

The easiest and best way I have found to handle the situation is to throw up a media server, because if you are generating enough traffic to need to spin up another instance, you are generally getting close to saturating your port as well.

Absolutely wrong. If you are on any cloud provider, you'll hit CPU/RAM/VPS limits long before any network limit (which could be 10GbE or 100GbE; you can't know).

There are really several different levels all of this can happen on. One thing you have to keep in mind as well is that if you are load balancing thirty bees instances, they DO require you to use sticky sessions. With sticky sessions, users will always access the same machine, so you will not run into issues of files being cached on one machine and the file not being created on the other machine.

Cache is not an issue.

If you are using a different form of load balancing, what you want to do is this: create your site, and create a logical storage block to handle the caches, img files, theme cache, and modules directory. Then create a mount point in the filesystem of your installation, where those directories are mounted in those locations. Then you want to package that as a machine instance that can be deployed over the network. This way, when your instances are deployed, they are preconfigured with the network storage mount points.

If you are using a cloud environment (Heroku? Google Container Engine? Kubernetes? Azure? ...), it is not easy to have a shared filesystem to use as a mount point (or to have one at a decent price).

And it won't be possible if you are scaling the infrastructure across multiple regions.

We could likely sit around and come up with a way to handle this at the application level, but there is no way we could handle it that would come close to the performance of handling it at the OS level. Since it is about performance, we just need to stick with the most performant option.

Performance is not an issue. We use the same system on many other projects and performance is not an issue. Keep in mind that only "writes" must be directed to the cloud storage, and writes are made only from the back office. Once a file has been uploaded to cloud storage, fetching it back from the cloud storage rather than from the instance disk doesn't change the performance, as the user still has to transfer it over the network. In addition, if you are using Amazon S3 or Google Cloud Storage, the storage is automatically cached at the cloud edge. It could be faster than hosting on TB itself.

Yes, uploading a file would be a little slower, but in a normal situation, uploads/writes are 1/10th or 1/100th of reads.

Link to comment
Share on other sites

@gandalf I don't think we can consider a module to be badly developed just because it uses the PHP filesystem API. I personally have created a few modules that would be impacted by this (their content would not be synchronized by default).

Now, there exist thousands of modules out there compatible with ps16/thirtybees. We can't really expect their developers to update them all, or force them to use some FS abstraction layer. It's just too late for that.

That said, I still think the application layer is not the right place for this, especially since there are very few merchants who would actually use this kind of feature.

Link to comment
Share on other sites

I am going to have to disagree. Most cloud providers port out at 1 Gbps globally. This is the best I can find on AWS for their outgoing port speed: https://aws.amazon.com/ec2/instance-types/

Internally it is a different matter, see https://aws.amazon.com/blogs/aws/the-floodgates-are-open-increased-network-bandwidth-for-ec2-instances/, but again, this is just for internal traffic.

A properly configured site using thirty bees and the page cache system can handle several hundred users on a 2 GB instance. To extrapolate, I tested a default thirty bees installation on a 2 GB, 1 vCPU Vultr server. It could hold 550 users before it started spiking wildly. But you have to consider that a default installation has very few, highly optimized images, and not very much CSS and JS as well.

Still, a default thirty bees installation is about 1 MB on the home page. So with 400 users per second on the home page, you will logically be sending out 400 MB/s of data; at 8 bits per byte, that is 3.2 Gbps, which is about what a c5.large instance can hold. Since they have faster processors, more memory for caching, and also more vCPUs, I would venture to guess that the load capacity also increases from the 550 users that a 2 GB, 1 vCPU Vultr instance can handle. But at 400 users you are still hitting port saturation and starting to queue requests. This is where you need the media server and mounted logical drives.

That being said, you raise another point about regional replication and how mounting across regions is an issue. This is actually a problem that solves itself in network architecture. If you are replicating to another region, why are you? The generally always-right answer is that you want to load a version of the site from a server closest to your clients. Your load balancer will essentially create an isolated point where the filesystems do not need to interact on any level for caches. The media servers will handle all of the front-end serving of static resources, and everything will be homogeneous.

You do have to factor in a direct way for you to access the master server in the whole array. I just do not personally see this as an application problem, since there are simple solutions that almost all applications use. It would be adding another layer of complexity and slowness to the software for not really any gain. Sure, I imagine a few people would use it, but many more would use a properly designed network, since it would be faster and more stable.

Link to comment
Share on other sites

@datakick said in Abstract filesystem layer:

@gandalf I don't think we can consider a module to be badly developed just because it uses the PHP filesystem API. I personally have created a few modules that would be impacted by this (their content would not be synchronized by default).

Now, there exist thousands of modules out there compatible with ps16/thirtybees. We can't really expect their developers to update them all, or force them to use some FS abstraction layer. It's just too late for that.

That said, I still think the application layer is not the right place for this, especially since there are very few merchants who would actually use this kind of feature.

If a TB administrator needs to put a TB shop in the cloud, they also know what that means and should look for modules that use a filesystem abstraction API.

You are not forced to use it... For example, we will skip any module that uses overrides or certain features, because they won't be compatible with our system. It is a decision that should be made for every shop, by the shop administrator.

Some features are not compatible with other features. I think this is normal.

Link to comment
Share on other sites

@lesley said in Abstract filesystem layer:

A properly configured site using thirty bees and the page cache system can handle several hundred users on a 2 GB instance. To extrapolate, I tested a default thirty bees installation on a 2 GB, 1 vCPU Vultr server. It could hold 550 users before it started spiking wildly. But you have to consider that a default installation has very few, highly optimized images, and not very much CSS and JS as well.

You can't compare Vultr/DigitalOcean to other cloud providers. They use local SSD disks. Any other (real) cloud provider, like Azure, AWS, or Google, uses network SSDs. To get the same IOPS you have on Vultr, you have to spin up a multi-gigabyte disk even if you only use 500 MB. And there are tons of drawbacks, one above all: planned maintenance. DO will shut down the instance; AWS/Google/Azure are transparent. They move your instance without downtime because the disks are replicated.

Try to move a DO droplet with a 100 GB disk... You'll get many minutes of downtime (we are DO customers).

You do have to factor in a direct way for you to access the master server in the whole array. I just do not personally see this as an application problem, since there are simple solutions that almost all applications use. It would be adding another layer of complexity and slowness to the software for not really any gain. Sure, I imagine a few people would use it, but many more would use a properly designed network, since it would be faster and more stable.

OK, let's take an example. Try to scale up (with no downtime) a TB shop (160 GB disk, because disk sizes on DigitalOcean are fixed by instance size) during Black Friday, when usually (the rest of the year) you have 20-30 concurrent users, while during a Black Friday limited offer you'll have, at peak, 300-500 concurrent users for about 2 or 3 hours. Are you on Vultr/DigitalOcean? Shut down, grow the instance (but not the disk), power on. You had downtime. HUGE downtime, as DO migrates your droplet (and your disk) to another server every time. And you have another (HUGE) downtime when scaling down. Plus the hardware limit of a single VM.

Are you starting a new project on DO? You have to choose the VM based on your standard requirements. VMs have fixed disks, so if you need 8 GB/2 vCPU you also get a huge 160 GB SSD disk, even if you only need 1 GB. From then on, any operation like growing/shrinking the instance will result in an image migration (with 160 GB to migrate over the network). I know this because up to 3 months ago we were on DO, and every time we had to scale we had 20-30 minutes of downtime.

With AWS, Google, Azure, ...: just spin up a new (preemptible/spot) instance for 2 hours, automatically and without downtime. With Heroku: "heroku ps:scale web=4".

It's easy to do. Just replace occurrences of "copy($src, $dst)" with "$filesystem->copy($src, $dst);" (obviously, this is just an example).
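For instance (a sketch under the same assumption, with illustrative paths, using Flysystem's 1.x API):

```php
<?php

use League\Flysystem\Filesystem;
use League\Flysystem\Adapter\Local;

$filesystem = new Filesystem(new Local(__DIR__)); // or an S3/GCS adapter

// Before: plain PHP, hard-wired to the local disk
copy(__DIR__.'/img/p/1/1.jpg', __DIR__.'/img/p/1/1-thumb.jpg');

// After: the same operation through the abstraction; the adapter alone
// decides whether this touches the local disk, S3, or GCS
$filesystem->copy('img/p/1/1.jpg', 'img/p/1/1-thumb.jpg');
```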

Link to comment
Share on other sites

@gandalf Since I am somewhat of a developer, I look at a problem like that differently than a store owner would. First, I would say the store owner needs a developer to handle it. The simple reason is that do-it-yourself solutions do not work the same as professionally done ones.

But to answer your question, this is what I would do for a client. In the week or two before Black Friday I would deploy a media server, just to take strain off the server. This leaves the main core server serving only PHP/AJAX requests and drops the traffic on the main machine to virtually nothing. Then I would spin up a load balancer. Once the load balancer is up, I would replicate the current live instance to a new, larger instance. That should only take a few hours. Once that is done, I would change the settings.inc.php file on the new instance to point to the database on the old instance, so we have two servers running one site. Then I would set the load balancer to send all traffic to the new instance, clear all the caches, and have zero downtime.
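For the settings.inc.php part of that procedure, only the database defines need to change on the new instance (the host and credentials below are placeholders, not real values):

```php
<?php
// settings.inc.php on the new, larger front-end instance.
// _DB_SERVER_ now points at the old instance, which keeps serving the database.
define('_DB_SERVER_', '10.0.0.5');  // placeholder: private IP of the old instance
define('_DB_NAME_', 'thirtybees');  // placeholder database name
define('_DB_USER_', 'tbuser');      // placeholder credentials
define('_DB_PASSWD_', 'secret');
```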

In the end, I would have a bigger front-end server, with the smaller server powering the database. I would load test it in the days leading up to the sale, just to make sure it can exceed the required load.

Link to comment
Share on other sites

Wow! What a massive amount of work, only for 2 or 3 hours on a single day of the year. It's a simple command on Heroku, or a simple click in the AWS/Google/Azure console (or absolutely nothing if you enable the auto-scale feature).

(And while you are syncing between these two instances, some users will add content and so on, so you have to sync again and again and again...)

Link to comment
Share on other sites

Currently nothing, but on every order at least one PDF is generated. You have to sync these... What you described is a workaround, not a solution. I opened this thread to propose a solution. It is not mandatory to use a cloud provider, as the Flysystem library also supports the local disk, and the local disk would be the default, so you can still use your workflow.

It's an addition allowing the use of a real cloud provider (and not a VPS provider with hourly billing like DO/Vultr), not a replacement.

If a TB administrator doesn't need any of this, they won't be affected, as TB would continue to use the local filesystem exactly as it does right now; but adding the Flysystem library would allow any other administrator to use cloud storage by changing a couple of lines in settings.inc.php.

EDIT:

something like the following:

```php
define('STORAGE', 'local'); // standard TB storage

/*
 * Use Google Cloud Storage
 */
define('STORAGE', 'gcs');
define('STORAGE_USER', '');
define('STORAGE_PASS', '');
define('STORAGE_ENDPOINT', '');
```

nothing more.

Link to comment
Share on other sites

Invoices can and should be generated on demand, so that problem is solved with the flick of a button.

Honestly, I cannot see this ever making it into the core. There are just so many better ways to do this. When you grow to the size where you need this, it's something you need to budget for.

Link to comment
Share on other sites

AFAIK, Sylius.

Yes, a module would be OK, but tons of overrides would be needed. For example, I don't see any hooks here: https://github.com/thirtybees/thirtybees/blob/1.0.x/classes/Image.php#L338 so an override is mandatory.
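To illustrate (a hedged sketch: the override mechanism and the ImageCore class exist, but the cloud-mirroring logic is hypothetical), such an override could live in override/classes/Image.php:

```php
<?php
// override/classes/Image.php -- hypothetical override; the mirroring comment
// marks where a Flysystem call would go, it is not an existing TB feature.
class Image extends ImageCore
{
    public function deleteImage($forceDelete = false)
    {
        $result = parent::deleteImage($forceDelete);
        if ($result) {
            // e.g. propagate the deletion to the configured cloud adapter:
            // $filesystem->delete($this->getImgPath().'.jpg');
        }
        return $result;
    }
}
```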

I really hate overrides (in fact, overrides are disabled in all installations I manage).

Link to comment
Share on other sites

Just to get an idea about how much work this is, I grepped through the sources for a couple of typical filesystem calls:

```
$ for F in file_exists file_get_contents file_put_contents file filemtime fopen is_dir is_file is_readable is_writeable mkdir; do echo -n "${F}(): "; grep -r "\b${F}(" | wc -l; done
file_exists(): 3402
file_get_contents(): 930
file_put_contents(): 568
file(): 273
filemtime(): 285
fopen(): 827
is_dir(): 764
is_file(): 341
is_readable(): 147
is_writeable(): 53
mkdir(): 478
```

That's like 8,000 calls to filesystem functions, which would then all need a change.

Link to comment
Share on other sites

You don't have to change all filesystem calls (most of them are cache-related and must stay in place).

Only the image-related calls, and anything else related to user assets (product attachments and so on).

Anything uploaded by the shop administrator from the back office.

Link to comment
Share on other sites
