With GPUonCLOUD PaaS, hosting your applications becomes truly flexible. In addition to automatic vertical scaling, GPUonCLOUD also lets you increase or decrease the number of servers in your environment manually or automatically.

The process of manual scaling is fairly simple – open the environment topology wizard and use the “+” and “–” buttons in its central pane to set the required number of nodes for the selected server.

Additionally, starting with the 5.5 platform version, the preferred scaling mode can be selected for new environments during creation and adjusted for existing ones through the topology wizard:

 

  • Stateless – simultaneously creates all new nodes from the base image template

  • Stateful – sequentially copies the file system of the master container into the new nodes

The first option is comparatively faster, while the second one automatically copies all custom configurations (e.g. deployments or Custom SSL).
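For environments delivered as custom JPS packages (covered in the tips below), the preferred mode can also be set declaratively. The following is a minimal sketch only: the node types, cloudlet values, and the uppercase STATELESS/STATEFUL values are assumptions for illustration and may differ on your platform version; only the scalingMode parameter itself comes from this guide.

    # Illustrative JPS manifest sketch (not a verified template):
    # node types and cloudlet values are placeholders; the point is the
    # per-layer scalingMode setting.
    type: install
    name: Scaling Mode Example
    nodes:
      - nodeGroup: cp              # application server layer
        nodeType: tomcat           # hypothetical stack identifier
        cloudlets: 8
        count: 2
        scalingMode: STATEFUL      # copy the master file system to new nodes
      - nodeGroup: bl              # load balancer layer
        nodeType: nginx
        cloudlets: 4
        scalingMode: STATELESS     # create new nodes from the base template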


Tips:

  • by default, the load balancer, application server, and VPS stacks are configured to use the stateful mode, while the others use stateless

  • when configuring a custom JPS package, the preferred scaling mode can be defined with the scalingMode parameter (as sketched above)

  • you can automate horizontal scaling based on incoming load with the help of tunable triggers

  • the transfer of custom files for the stateless mode can be done manually or configured via the Cloud Scripting automation (e.g. using the onBeforeScaleOut and onAfterScaleOut events; see the sketch after this list)

  • during the initial layer creation, all of the nodes are created simultaneously, even for the stateful mode (as no customization has been applied yet)

  • you can use the initial (master) node of the layer as your storage server for sharing data within the whole layer

  • in case of scaling in (i.e. decreasing the number of nodes), the last container added to the layer is the first one to be removed
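As mentioned in the tips above, copying custom files to newly added stateless nodes can be automated with Cloud Scripting events. The snippet below is a sketch only: the exact event-handler syntax, the cp node group, and the downloaded file path are assumptions used for illustration, not a verified add-on.

    # Illustrative Cloud Scripting add-on sketch (syntax may vary by version)
    type: update
    name: Sync Custom Files on Scale Out
    onAfterScaleOut [cp]:                # fires after new cp nodes are added
      forEach(event.response.nodes):     # iterate over the newly created nodes
        cmd [${@i.id}]:                  # run a shell command on each new node
          - wget -O /tmp/custom.conf https://example.com/custom.conf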

The maximum number of same-type servers within a single environment layer depends on the particular hosting provider's settings (usually, the limit is 16 nodes). You can check the exact value in the Quotas & Pricing > Account Limits information frame.

All newly added servers are created on different hardware nodes, providing advanced reliability and high availability.



Each environment node group (layer) is provided with a dedicated name, which can be manually adjusted if needed. If there are several instances inside, the layer name is complemented with an xN label (where N is the actual number of nodes).

Having several same-type nodes within a layer enables their synchronous management: all of the comprised containers can be simultaneously configured, inspected for logs and statistics, restarted, or redeployed through the corresponding icons.

 <AHS1>


In order to operate a particular container separately, expand the layer’s row to see the full list of its nodes. Each of these containers is an isolated instance with a unique Node ID and can be accessed/configured apart from the others. Herewith, the layer’s master node can be easily located thanks to its dedicated icon.

 <AHS2>

To facilitate interaction with numerous servers of the same type, GPUonCLOUD also allows marking a particular node with an appropriate label, e.g. to distinguish the master and slave instances in a DB cluster.

Just double-click the default Node ID: xxx value (or hover over it to reveal a pencil icon) and specify the desired alternative name.

 <AHS3>

More information on this labeling feature can be found in the Environment Aliases document.

While scaling different types of stacks, consider the following specifics:

  • upon scaling the application server instance, the load balancer node will be automatically added to the environment topology

  • if the high-availability option is enabled for the application server, the obligatory NGINX load balancer cannot be scaled horizontally (if several NGINX nodes were present before, they will be automatically scaled down to a single instance)

  • upon scaling VPS nodes, each one is provided with a separate attached Public IP address

  • Maven is the only node that cannot be scaled horizontally (as there is no point in such an operation)

Now you know how easy it is to horizontally scale instances in GPUonCLOUD PaaS and are aware of the operation specifics. Also, feel free to configure automatic node scaling to smoothly handle high load spikes without overpaying for unused resources.