thirty bees forum

One redis, many shops


Havouza

Question

20 answers to this question

Recommended Posts


But I doubt that is all true. I have two instances of Redis running on one of the servers. The second one is used to speed up saving Piwik results to the db, and in that case it is strongly advised NOT to use that Redis instance for anything else. I just thought that the caches from two different shops probably look totally different.


@Havouza I bet "high speed" means that the shared hosting account gets up to 1~2 GB of memory and 1 to 2 cores dedicated to it. I can do that on cPanel servers with CloudLinux. And a VPS with 2 GB runs around $6.99/mo in USD.

With CloudLinux you can tell it that an account can have an I/O limit of XXX megabytes per second, use 1 to 32 cores, get XXX amount of memory dedicated to it, and run XXX processes at the same time. That gives me many more hosting package options to offer customers.


@Havouza You don't. But there is one feature I like in CloudLinux: it chroots every account into its own environment, so accounts don't share the same memory space, and the relevant operating system files are copied into each account. The CageFS cache requires around 4 GB on a fast SSD partition.
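(In case anyone wonders what "chroot every account" boils down to, here is a tiny sketch of the idea in Python; the jail path is made up, and CageFS itself does far more than this.)

```python
# Minimal sketch of per-account chroot isolation. The /home/jail path is
# hypothetical and must already contain its own /bin, /lib, etc. Needs root.
import os

jail = "/home/jail"
os.chroot(jail)   # this process now sees `jail` as its filesystem root
os.chdir("/")     # step into the new root so relative paths resolve inside it
```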

You are right, though! Noisy neighbors on a server that hosts VPS accounts can still disrupt the others when they push heavy input/output across the CPU bus.


@Havouza said in One redis, many shops:

Cgroups

Interesting! I had never heard of that Linux kernel feature before. https://sysadmincasts.com/episodes/14-introduction-to-linux-control-groups-cgroups seems rather detailed, and yah, no GUI! haha! CloudLinux has a GUI and graphs. I can tweak values over time to level out the averages for various hosting package values. I can even extend the cPanel package file to set values for CloudLinux, and those values then show up when I edit a package with all its form fields.
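For anyone curious what cgroups look like without a GUI, here is a rough sketch against the cgroup v2 filesystem. The group name, PID, and limit values are only examples, it needs root, and the cpu/memory controllers have to be enabled on the parent cgroup.

```python
# Rough sketch: capping memory and CPU for one "account" via cgroup v2.
# Paths, PID and limits are illustrative only; requires root and a cgroup v2 mount.
from pathlib import Path

cg = Path("/sys/fs/cgroup/shop1")                # hypothetical group for one hosting account
cg.mkdir(exist_ok=True)                          # creating the directory creates the cgroup

(cg / "memory.max").write_text("2G\n")           # hard memory cap, e.g. 2 GB
(cg / "cpu.max").write_text("200000 100000\n")   # 200ms of CPU per 100ms period, about 2 cores
(cg / "cgroup.procs").write_text("12345\n")      # move PID 12345 (example) into the group
```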


@Havouza said in One redis, many shops:

Something came up that I have not really thought about. I have 3 tb shops on the same server. All use the Redis cache. If I understand it correctly, they use the same instance of Redis. Is this not a problem?

It really depends on how busy the shops are, and what your server resources are.

Redis is capable of caching multiple databases within one instance, and doing so will not cause any sort of data corruption or cache problems.

However, Redis is not multithreaded, so caching multiple 30bz databases within a single Redis instance will not scale well if the sites are busy. In that case it is better to run one Redis instance per site, with each instance on its own port.

On the other hand, running multiple Redis instances requires more memory and creates additional system resource overhead, which can be a problem on smaller servers such as a small VPS. So if you have multiple sites that are not getting a lot of traffic, a single instance of Redis is going to be the better (and perhaps only) option.
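To make the two layouts concrete, here is roughly what they look like from the application side using the redis-py client. The ports and db numbers are placeholders, not thirty bees defaults.

```python
# Sketch of the two setups discussed above, using the redis-py client.
import redis

# Option 1: one shared instance, each shop in its own logical database (0-15 by default).
shop1 = redis.Redis(host="127.0.0.1", port=6379, db=0)
shop2 = redis.Redis(host="127.0.0.1", port=6379, db=1)

# Option 2: one instance per shop, each listening on its own port.
shop1_own = redis.Redis(host="127.0.0.1", port=6379)
shop2_own = redis.Redis(host="127.0.0.1", port=6380)

shop1.set("homepage:html", "<cached page>")  # stored in db 0 only
print(shop2.get("homepage:html"))            # -> None, the shops never see each other's keys
```

Either way the keys stay isolated; the difference is whether the shops share one single-threaded Redis process or each get their own.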


@dprophitjr said in One redis, many shops:

@dynambee So 1 Gig instance dedicated to Redis per site?

I don't think Redis will use that much memory per instance, at least not for most 30bz sites. They have some details about memory usage here.

Edit: Also, it looks like the additional memory overhead per Redis instance is only about 1 MB. If this is accurate, then the memory used by a single Redis instance caching 5 databases and by five separate Redis instances caching those same 5 databases shouldn't differ by more than a few MB. This is likely too simple an example, but Redis doesn't seem to have a memory footprint much bigger than the data it is caching.
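If you would rather measure it on your own server than trust the FAQ, redis-py can read the numbers straight from the INFO command. The ports below are placeholders for however many instances you run.

```python
# Sketch: compare the memory footprint of each Redis instance via INFO.
import redis

for port in (6379, 6380, 6381):                  # placeholder ports, one per instance
    r = redis.Redis(host="127.0.0.1", port=port)
    mem = r.info("memory")
    print(port, mem["used_memory_human"], "peak:", mem["used_memory_peak_human"])
```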


I just read more of the Redis FAQ I linked to above and came across these bits:

"For instance, using pipelining Redis running on an average Linux system can deliver even 500k requests per second, so if your application mainly uses O(N) or O(log(N)) commands, it is hardly going to use too much CPU."

and

"Redis can handle up to 2^32 keys, and was tested in practice to handle at least 250 million keys per instance. Every hash, list, set, and sorted set, can hold 2^32 elements. In other words your limit is likely the available memory in your system."

As long as Redis is properly configured & has enough memory available I can't see a bunch of 30bz sites coming remotely close to maxing out even a single Redis instance. Something else will become a bottleneck long before Redis will.
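For anyone wondering what "using pipelining" means in practice, here is a small sketch with redis-py; the key names are made up.

```python
# Sketch: batching commands in a pipeline so 1000 writes cost one round trip.
import redis

r = redis.Redis(host="127.0.0.1", port=6379)

pipe = r.pipeline()
for i in range(1000):
    pipe.set(f"demo:key:{i}", i)   # queued on the client, nothing sent yet
replies = pipe.execute()           # all 1000 SETs go out in one round trip
print(len(replies), "replies")
```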
