title: "Deployment"
A Vendure application is essentially a Node.js application, and can be deployed to any environment that supports Node.js.
The bare minimum requirements are a Node.js runtime and an SQL database supported by Vendure (e.g. MySQL/MariaDB or PostgreSQL).
A typical pattern is to run the Vendure app on the server, e.g. at http://localhost:3000, and then use nginx as a reverse proxy to direct requests from the Internet to the Vendure application.
Here is a good guide to setting up a production-ready server for an app such as Vendure: https://www.digitalocean.com/community/tutorials/how-to-set-up-a-node-js-application-for-production-on-ubuntu-18-04
For a production Vendure server, there are a few security-related points to consider when deploying:
By default, Vendure uses auto-increment integer IDs as entity primary keys. While easier to work with in development, sequential primary keys can leak information such as the number of orders or customers in the system. For this reason you should consider using the UuidIdStrategy for production.
```ts
import { UuidIdStrategy, VendureConfig } from '@vendure/core';

export const config: VendureConfig = {
    entityIdStrategy: new UuidIdStrategy(),
    // ...
};
```
Consider using helmet as middleware (add to the apiOptions.middleware array) to handle security-related headers.
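As a minimal sketch, helmet could be wired into the middleware array like this (it assumes the helmet package is installed; the `route: '/'` value applies the headers to all routes):

```ts
import { VendureConfig } from '@vendure/core';
import helmet from 'helmet';

export const config: VendureConfig = {
    apiOptions: {
        middleware: [
            {
                // Apply helmet's default security-related headers to all routes
                handler: helmet(),
                route: '/',
            },
        ],
    },
    // ...
};
```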
Vendure supports running in a serverless or multi-instance (horizontally scaled) environment. The key consideration in configuring Vendure for this scenario is to ensure that any persistent state is managed externally from the Node process, and is shared by all instances. Namely:
When using cookies to manage sessions, make sure all instances are using the same cookie secret:
```ts
import { VendureConfig } from '@vendure/core';

export const config: VendureConfig = {
    authOptions: {
        cookieOptions: {
            // All instances must share the same secret, otherwise a session
            // created on one instance cannot be read by another.
            secret: 'some-secret',
        },
    },
};
```
Channel and Zone data is cached in-memory, since this data is used in virtually every request. The cache time-to-live defaults to 30 seconds, which is fine for most cases, but it can be configured via the EntityOptions.
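For example, here is a sketch of lengthening the cache TTLs (the `channelCacheTtl` and `zoneCacheTtl` options belong to EntityOptions; check that your Vendure version supports them):

```ts
import { VendureConfig } from '@vendure/core';

export const config: VendureConfig = {
    entityOptions: {
        // TTLs are given in milliseconds; 60000 = 1 minute
        channelCacheTtl: 60000,
        zoneCacheTtl: 60000,
    },
    // ...
};
```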
If you wish to deploy with Kubernetes or some similar system, you can make use of the health check endpoints.
This is a regular REST route (note: not GraphQL), available at /health.
REQUEST: `GET http://localhost:3000/health`

```json
{
  "status": "ok",
  "info": {
    "database": {
      "status": "up"
    }
  },
  "error": {},
  "details": {
    "database": {
      "status": "up"
    }
  }
}
```
Health checks are built on the NestJS Terminus module. You can also add your own health checks by creating plugins that make use of the HealthCheckRegistryService.
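As a sketch, a plugin can register an additional indicator via the HealthCheckRegistryService like this (the plugin name and the URL being pinged are illustrative only):

```ts
import { HttpHealthIndicator, TerminusModule } from '@nestjs/terminus';
import { HealthCheckRegistryService, PluginCommonModule, VendurePlugin } from '@vendure/core';

@VendurePlugin({
    imports: [PluginCommonModule, TerminusModule],
})
export class MyCustomHealthCheckPlugin {
    constructor(
        private healthCheckRegistryService: HealthCheckRegistryService,
        private http: HttpHealthIndicator,
    ) {
        // The registered indicator is included in the /health response
        this.healthCheckRegistryService.registerIndicatorFunction(() =>
            this.http.pingCheck('some-external-service', 'https://example.com'),
        );
    }
}
```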
Although the worker is not designed as an HTTP server, it contains a minimal HTTP server specifically to support HTTP health checks. To enable this, you need to call the startHealthCheckServer() method after bootstrapping the worker:
```ts
bootstrapWorker(config)
    .then(worker => worker.startJobQueue())
    .then(worker => worker.startHealthCheckServer({ port: 3020 }))
    .catch(err => {
        console.log(err);
    });
```
This will make the /health endpoint available. When the worker instance is running, it will return the following:
REQUEST: `GET http://localhost:3020/health`

```json
{
  "status": "ok"
}
```
{{< alert >}}
Note: there is also an internal health check mechanism for the worker, which does not use HTTP. This is used by the server's own health check to verify whether at least one worker is running. It works by adding a check-worker-health job to the JobQueue and checking that it gets processed.
{{< /alert >}}
If you have customized the Admin UI with extensions, it can make sense to compile your extensions ahead-of-time as part of the deployment process.
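A standalone compilation script might look like the following sketch, using compileUiExtensions from the @vendure/ui-devkit/compiler package (the output path and the empty extensions array are placeholders for your own setup):

```ts
import { compileUiExtensions } from '@vendure/ui-devkit/compiler';
import path from 'path';

compileUiExtensions({
    // Directory into which the compiled Admin UI app is written
    outputPath: path.join(__dirname, 'admin-ui'),
    extensions: [
        // ... your UI extensions here
    ],
})
    .compile?.()
    .then(() => {
        process.exit(0);
    });
```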