Basic
This guide covers common DoltLab administrator configuration and tasks for the latest versions of DoltLab, >= v2.1.0
. These versions use the installer binary included in DoltLab's .zip
file. For instructions on running DoltLab in Enterprise mode and configuring exclusive Enterprise features, see the Enterprise guide. If you're using an older version of DoltLab that does not include the installer, please see the administrator guide for those earlier versions.
To back up DoltLab's remote data (the database data for all databases on a given DoltLab instance), leave DoltLab's services up and run:
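A minimal sketch of the Docker volume-backup pattern this guide follows, assuming the remote data volume is mounted in the remote API container; the container name and in-container path below are assumptions, so check docker ps and your compose file for the real values. The same pattern applies to the user-uploaded-data and Dolt server volumes below.

```bash
# Archive the remote data volume into the working directory (names are examples).
docker run --rm --volumes-from doltlab-doltlabremoteapi-1 \
  -v "$(pwd)":/backup \
  ubuntu tar cvf /backup/remote-data.tar /doltlab-remote-storage
```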
This will create a tar file called remote-data.tar
in your working directory.
To back up user-uploaded files, run:
This will create a tar file called user-uploaded-data.tar
in your working directory.
To back up Dolt server data, run:
Before restoring DoltLab's volumes from a backup, first stop the running DoltLab services, prune
the Docker containers, and remove the old volume(s):
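For example (a sketch; list the actual volume names on your host with docker volume ls before removing anything):

```bash
./stop.sh
docker container prune
docker volume ls | grep doltlab   # identify the volume(s) you are restoring
docker volume rm <volume name>    # e.g. doltlab_doltlabdb-dolt-backups
```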
Once the services are stopped, cd
into the directory containing the remote-data.tar
backup file and run:
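A sketch of the restore side of the same Docker volume pattern; as above, the container name and target path are assumptions. The remaining tar files below are restored with the same pattern against their respective containers.

```bash
# Unpack the backup archive back into the container's volume.
docker run --rm --volumes-from doltlab-doltlabremoteapi-1 \
  -v "$(pwd)":/backup \
  ubuntu bash -c "cd / && tar xvf /backup/remote-data.tar"
```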
To restore user uploaded data, cd
into the directory containing user-uploaded-data.tar
and run:
To restore Dolt server root data, cd
into the directory containing doltlabdb-root.tar
and run:
To restore Dolt server config data, cd
into the directory containing doltlabdb-configs.tar
and run:
To restore Dolt server data, cd
into the directory containing doltlabdb-data.tar
and run:
To restore Dolt server local backup data, cd
into the directory containing doltlabdb-backups.tar
and run:
You can now restart DoltLab, and should see all data restored from the tar
files.
dolt backup command
Next, add a local backup using the DOLT_BACKUP()
stored procedure. By default, DoltLab uses a Docker volume backed by the host's disk that allows you to create backups of the Dolt server. These backups will be located at /backups
from within the Dolt server container. To create persistent backups, simply use /backups
as the path prefix to the backup names:
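A minimal sketch from a SQL shell connected to the Dolt server; the backup name local-backup is an example, and the path matches the backup location described below.

```sql
CALL DOLT_BACKUP('add', 'local-backup', 'file:///backups/dolthubapi/2023/06/01');
```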
The above snippet will create a new backup stored at /backups/dolthubapi/2023/06/01
within the Dolt server container, and persisted to the host using the Docker volume doltlab_doltlabdb-dolt-backups
.
You can sync the backup with the sync
command:
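For example, using the backup name added above:

```sql
CALL DOLT_BACKUP('sync', 'local-backup');
```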
The local backup is now synced, and you can disconnect the shell.
At the time of this writing, Dolt only supports restoring backups using the CLI. To restore the Dolt server from a local backup, stop DoltLab's services using ./stop.sh
.
Delete the existing ./dolthubapi
directory located at /var/lib/dolt
from within this container:
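For example, from the shell opened inside the Dolt server container (a sketch; confirm the path before deleting anything):

```bash
cd /var/lib/dolt
rm -rf dolthubapi
```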
The database has now been successfully restored, and you can now restart DoltLab.
You can find the location where Docker writes a service's logs by inspecting the LogPath
of the service.
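For example, to print the log file path for the doltlabapi container (the container name is an example; substitute the actual name from docker ps):

```bash
docker inspect --format '{{.LogPath}}' doltlabapi
```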
Below is a list of data logged by DoltLab's various services. As of DoltLab v2.3.10, DoltLab's logging is not configurable via the installer, but this may change.
This is DoltLab's application Dolt server.
DoltLab runs the server at log-level=debug
, which writes full database queries to the server logs. The logs at this level contain sensitive information, including insert statements executed against the server, session information, and end-user information. These logs should not be shared with third parties.
DoltLab's Remote API gRPC service manages access to remote data. This service uses gRPC middleware to log the following information on all ingress requests:
gRPC Method
gRPC Code
Tracing Request ID
User Agent
Panic stack if one occurs.
The Remote API service itself logs the following:
Data conflicts if detected.
S3 Bucket names (If AWS Cloud backed storage is configured).
S3 Object Keys (If AWS Cloud backed storage is configured).
File size and S3 UploadPart numbers (If AWS Cloud backed storage is configured).
DoltLab usernames, display names, and email addresses.
Internal deployment IDs.
Internal repository IDs.
Hashes of downloaded chunks.
Upload URL where chunks will be uploaded.
Repository token root hash.
Commit hashes.
DoltLab's Remote API also runs a background process that serves repository data on a different port, but
writes logs to Stdout of doltlabremoteapi
. This is an HTTP service and it logs the following information
for all ingress requests:
HTTP method
HTTP URL
Request timing information
DoltLab's Main API, which is a gRPC service. This service uses gRPC middleware to log the following information on all ingress requests:
gRPC Method
gRPC Code
Tracing Request ID
User Agent
Panic stack if one occurs.
DoltLab API itself logs the following:
The number of request headers and token prefix used on ingress repository authentication requests.
The username of the default DoltLab user, often admin
.
Internal end-user session IDs.
Internal end-user API token IDs.
Internal webhook IDs.
Database table column tag names, during some errors.
The full logs of any DoltLab Job that resulted in an error. Because DoltLab Jobs are cleaned up after they complete, which destroys their logs, the logs of a failed Job are written to the DoltLab API logs so they are persisted for debugging.
DoltLab currently supports four kinds of Jobs: "file import," "pull request merge," "large query," and "continuous integration" Jobs. Each of these Jobs runs outside the main DoltLab API process. These Jobs run Dolt binaries in order to perform tasks, and their logs contain the following information:
Dolt CLI Stderr output if the command has written to stderr.
Repository owner.
Repository owner email address (pull request merge Job, large query Job).
Repository name.
Repository branches (only those relevant to the Job's task).
Internal operation IDs.
AWS SDK errors (if AWS cloud backed storage is configured).
S3 Bucket names (if AWS cloud backed storage is configured).
S3 Object Keys (if AWS cloud backed storage is configured).
Internal user IDs.
Repository commits (only those relevant to the Job's task).
Key/path of a file stored by doltlabfileserviceapi
(file import Job).
Repository table name (file import Job).
Original uploaded file name (file import Job).
Pull request merge commit message (pull request merge Job).
Repository query (large query Job).
dolt sql-server
logs at the default log-level (continuous integration Job).
Saved query name (continuous integration Job).
Saved query value (continuous integration Job).
DoltLab File Service API manages user-uploaded files when cloud-backed storage is not configured. This is an HTTP service, and it logs the following information for all ingress requests:
HTTP method
HTTP URL
Request timing information
DoltLab GraphQL API is the data API for DoltLab's frontend UI. This service only logs errors returned by doltlabapi
.
DoltLab UI is a React application. It does not log any information on the server, and only logs an internal "ResourceType" name in the client on certain errors.
If you need to send service logs to the DoltLab team, first locate the logs on the host using the docker inspect
command, then cp
the logs to your working directory:
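A sketch of both steps combined; the service name and destination file name are examples.

```bash
# Resolve the log path for the container and copy it into the working directory.
sudo cp "$(docker inspect --format '{{.LogPath}}' doltlabapi)" ./doltlabapi.log
```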
Next, change permissions on the copied file to enable reads by running:
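For example:

```bash
sudo chmod 644 ./doltlabapi.log
```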
Finally, download the copied log file from your DoltLab host using scp
. You can then send this and any other log files to the DoltLab team member you're working with via email.
Running the dolt login
command will open your browser window to the --login-url
with credentials populated in the "Public Key" field. Simply add a "Description" and click "Create", then return to your terminal to see your Dolt client successfully authenticated.
Next, log in to your DoltLab account, click your profile image, then click "Settings" and then "Credentials".
Paste the public key into the "Public Key" field, write a description in the "Description" field, then click "Create".
Your Dolt client is now authenticated for this DoltLab account.
The metrics for these services are available at endpoints corresponding to each service's container name. For DoltLab's Remote API, that's :7770/doltlabremoteapi
, and for DoltLab's Main API that's :7770/doltlabapi
.
To make these endpoints available to Prometheus, open port 7770
on your DoltLab host.
Run cAdvisor
as a Docker container in daemon mode with:
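A sketch based on cAdvisor's standard invocation; the image tag is an example, so use the current release.

```bash
sudo docker run \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:ro \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --publish=8080:8080 \
  --detach=true \
  --name=cadvisor \
  gcr.io/cadvisor/cadvisor:v0.49.1
```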
To run a Prometheus server on your DoltLab host machine, first open port 9090
on the DoltLab host. Then, write the following prometheus.yml
file on the host:
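A sketch of a prometheus.yml that scrapes the DoltLab metrics endpoints and cAdvisor described above; the host.docker.internal targets assume Prometheus runs in Docker on the DoltLab host, as noted below.

```yaml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: doltlabremoteapi
    metrics_path: /doltlabremoteapi
    static_configs:
      - targets: ["host.docker.internal:7770"]
  - job_name: doltlabapi
    metrics_path: /doltlabapi
    static_configs:
      - targets: ["host.docker.internal:7770"]
  - job_name: cadvisor
    metrics_path: /metrics
    static_configs:
      - targets: ["host.docker.internal:8080"]
```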
Then, start the Prometheus server as a Docker container running in daemon mode:
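A sketch; the --add-host flag is only needed when Prometheus runs on the DoltLab host itself, as explained below.

```bash
docker run -d \
  --name=prometheus \
  --add-host host.docker.internal:host-gateway \
  --publish=9090:9090 \
  --volume="$(pwd)"/prometheus.yml:/etc/prometheus/prometheus.yml \
  prom/prometheus
```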
--add-host host.docker.internal:host-gateway
is only required if you are running the Prometheus server on your DoltLab host. If it's running elsewhere, this argument may be omitted, and the host.docker.internal
hostname in prometheus.yml
can be changed to the hostname of your DoltLab host.
DoltLab supports explicit email whitelisting to prevent account creation by unauthorized users.
Use this script by supplying the DOLT_PASSWORD
you used to start your DoltLab instance. Run:
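A sketch; whether shell-db.sh reads DOLT_PASSWORD from the environment as shown is an assumption, so check the script generated on your host.

```bash
DOLT_PASSWORD=<your dolt password> ./doltlabdb/shell-db.sh
```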
You will see a mysql>
prompt connected to DoltLab's application Dolt database.
Execute the following INSERT
to allow the user with example@address.com
to create an account on your DoltLab instance:
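A sketch of the INSERT; the column names of email_whitelist_elements are assumptions here, so confirm them first with DESCRIBE email_whitelist_elements.

```sql
INSERT INTO email_whitelist_elements (email_address, created_at, updated_at)
VALUES ('example@address.com', NOW(), NOW());
```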
In the external Dolt database, prior to connecting your DoltLab instance, run the following SQL statements:
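A sketch of the kind of users and grants DoltLab expects, based on the user and database names referenced elsewhere in this guide; the exact statements and privileges for your DoltLab version may differ.

```sql
CREATE USER 'dolthubadmin' IDENTIFIED BY '<admin_password>';
CREATE USER 'dolthubapi' IDENTIFIED BY '<dolthubapi_password>';
CREATE DATABASE IF NOT EXISTS dolthubapi;
GRANT ALL ON *.* TO 'dolthubadmin';
GRANT ALL ON dolthubapi.* TO 'dolthubapi';
```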
When you restart your instance it should now be connected to your external Dolt database.
DoltLab Jobs are stand-alone, long-running Docker containers that perform specific tasks for DoltLab users behind the scenes.
As a result, DoltLab may consume additional memory and disk, depending on the number of running Jobs and their workload.
By default, DoltLab collects first-party metrics for deployed instances. We use DoltLab's metrics to determine how many resources to allocate toward its development and improvement.
Let's say we have set up and run an EC2 instance with the latest version of DoltLab and have successfully configured its Security Group to allow ingress traffic on 80
, 100
, 4321
, and 50051
. By default, this host will have a public IP address assigned to it, but this IP is unstable and will change whenever the host is restarted.
Your DoltLab host should now be accessible via your new domain name. You can now stop your DoltLab server and update the host
field in your ./installer_config.yaml
.
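A sketch of the relevant installer_config.yaml field; host is the field referenced above, set to your new domain name, with the rest of the file unchanged.

```yaml
host: yourdomain.com
```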
Restart your DoltLab instance with ./start.sh
.
In the event you are configuring your domain name with an Elastic Load Balancer, ensure that it specifies Target Groups for each of the ports required to operate DoltLab, 80
, 100
, 4321
, and 50051
.
To configure a DoltLab instance to use Hosted Dolt, follow the steps below as we create a sample Hosted Dolt deployment called my-doltlab-db-1
.
You will then see a form where you can specify details about the host you need for your DoltLab instance:
In the image above you can see that we defined our Hosted Dolt deployment name as my-doltlab-db-1
, selected an AWS EC2 host with 2 CPU and 8 GB of RAM in region us-west-2
. We've also requested 200 GB of disk. For DoltLab, these settings should be more than sufficient.
We have also requested a replica instance by checking the "Enable Replication" box, and specifying 1
replica, although replication is not required for DoltLab.
You will see the hourly cost of running the Hosted Dolt instance displayed above the "Create Deployment" button. Click it, and wait for the deployment to reach the "Started" state.
Once the deployment has come up, the deployment page will display the connection information for both the primary host and the replica, and each will be ready to use. Before connecting a DoltLab instance to the primary host, though, there are a few remaining steps to take to ensure the host has the proper state before connecting DoltLab.
First, click the "Configuration" tab and uncheck the box "behavior_disable_multistatements". DoltLab will need to execute multiple statements against this database when it starts up. You can also, optionally, change the log_level to "debug". This log level setting will make sure executed queries appear in the database logs, which is helpful for debugging.
Click "Save Changes".
Next, navigate to the "Workbench" tab and check the box "Enable Writes". This will allow you to execute writes against this instance from the SQL workbench. Click "Update".
Then, with writes enabled, on this same page, click "Create database" to create the database that DoltLab expects, called dolthubapi
.
Finally, create the required users and grants that DoltLab requires by connecting to this deployment and running the following statements:
You can do this by running these statements from the Hosted workbench SQL console, or by connecting to the database using the mysql client connection command on the "Connectivity" tab, and executing these statements from the SQL shell.
This instance is now ready for a DoltLab connection.
To connect DoltLab to my-doltlab-db-1
, ensure that your DoltLab instance is stopped.
Next, edit the services.doltlabdb.host
, services.doltlabdb.port
, and services.doltlabdb.tls_skip_verify
fields of the installer_config.yaml
.
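A sketch of the fields named above; the hostname and port are placeholders for the values shown on your deployment's "Connectivity" tab.

```yaml
services:
  doltlabdb:
    host: "<my-doltlab-db-1 hostname>"
    port: 3306
    tls_skip_verify: true
```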
It is possible to limit the number of concurrent Jobs running on a DoltLab host, since too many concurrent Jobs can starve the host of resources and degrade DoltLab's performance.
When users upload files on a DoltLab instance, or merge a pull request, DoltLab creates a Job corresponding to this work. These Jobs spawn new Docker containers that perform the required work.
By default, DoltLab imposes no limit on the number of concurrent Jobs that can be spawned. As a result, a DoltLab host might experience resource exhaustion as the Docker engine uses all available host resources for managing its containers.
To limit concurrent Jobs, edit ./installer_config.yaml
to contain the following:
If this instance should only be accessible by the NLB, ensure that the DoltLab host is created in a private subnet and does not have public IP address.
After setting up your DoltLab host, edit the host's inbound security group rules to allow all traffic on ports: 80
, 100
, 4321
, 50051
, and 2001
.
Because the host is in a private subnet with no public IP though, only the NLB will be able to connect to the host on these ports.
When creating the target groups, select Instances
as the target type. Then, select TCP
as the port protocol, followed by the port to use for the target group. In this example we will map all target group ports to their corresponding DoltLab port, i.e. 80:80
, 100:100
, 4321:4321
and 50051:50051
. Select the same VPC used by your DoltLab host as well.
During target group creation, in the Health Checks
section, click Advanced health check settings
and select Override
to specify the port to perform health checks on. Here, enter 2001
, the health check port for DoltLab's Envoy proxy, doltlabenvoy
. We will use this same port for all target group health checks.
After clicking Next
, you will register targets for your new target group. Here you should see your DoltLab host. Select it and specify the port the target group will forward to.
Click Include as pending below
, then click Create target group
.
Once you've created your target groups you can create the NLB.
Be sure to select the Network Load balancer as the other types of load balancers may require different configurations.
Then, create an NLB in the same VPC and subnet as your DoltLab host that uses Scheme: Internet-facing
and Ip address type: IPV4
.
Additionally, select the same availability zone that your DoltLab host uses. You can use the default
security group for your NLB, however the ingress rules for this group will need to be updated before inbound traffic will be able to reach your NLB.
In the Listeners section, add listeners for each target group you created, specifying the NLB port to use for each one. But again, in this example we will forward on the same port. Click Create load balancer
.
It may take a few minutes for the NLB to become ready. After it does, check each target group you created and ensure they are all healthy.
Next, edit the inbound rules for the security group attached to the NLB you created so that it allows connections on the listening ports.
On the NLB page you should now see the DNS name of your NLB which can be used to connect to your DoltLab instance.
If DoltLab has never been started before on the host using the start.sh
script, the passwords for its application database doltlabdb
can be updated simply by editing their values in the installer_config.yaml
, and then running the installer
.
If DoltLab has been started before, then its application database has been initialized already, and has existing passwords for the SQL users dolthubadmin
and dolthubapi
. Changing the passwords in this instance requires DoltLab to be running, so that you can connect to the application database doltlabdb
with a SQL shell.
Ensure that DoltLab is running by executing the ./start.sh
script. Then, run ./doltlabdb/shell-db.sh
to open a SQL shell against doltlabdb
. You will see a prompt like:
Next, update the passwords for the users dolthubadmin
and dolthubapi
to the values of your choosing:
Close the SQL shell, and stop DoltLab with ./stop.sh
.
Next, update installer_config.yaml
to contain the new passwords you changed in the live database:
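A sketch of the relevant fields; whether they live under services.doltlabdb in your installer_config.yaml is an assumption, so match the structure of your existing file.

```yaml
services:
  doltlabdb:
    admin_password: "<new dolthubadmin password>"
    dolthubapi_password: "<new dolthubapi password>"
```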
Finally, rerun the installer
to regenerate DoltLab's assets with the new password values.
Completing these steps ensures that the passwords are consistent on disk and in the assets generated by the installer
. You can now restart DoltLab.
To configure DoltLab to use these images without egress access, first download the zip file containing the service images and upload it onto your DoltLab host.
Next, upload the corresponding DoltLab zip file to your DoltLab host as well. Both should be present on the host before continuing.
First, unzip the DoltLab zip folder to a directory called doltlab
, and cd
into the directory.
After you've generated the static assets, it's time to load the service images. cd
into the directory with the service images zip file. Unzip this file to a directory called service-images
.
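A sketch, assuming an example zip file name; unzip the archive and load each tar it contains with docker load.

```bash
unzip service-images.zip -d service-images
cd service-images
for image in *.tar; do
  docker load -i "$image"
done
```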
This will load the required service images into Docker and they are immediately ready for DoltLab to use. Be sure to load ALL images contained within service-images
, as failing to do so will cause DoltLab to not work correctly.
You can now return to the doltlab
directory and start your DoltLab instance.
To do so, using the default user admin
account on your DoltLab instance, navigate to Profile > Settings > Reset user passwords.
After completing the form, the selected user will be able to login with their new password.
Starting with DoltLab >= v2.3.7, resetting a user's password will also reset the user's password attempts. If your instance is < v2.3.7, follow the steps in the next section to reset the user's password attempts as well, if required.
In the event a user has exceeded the maximum number of password attempts, 3, the DoltLab admin can reset the user's password attempts by using the doltlabdb/shell-db.sh
script.
Once connected to the database, run the following SQL statements to reset the user's password attempts:
This will reset the user's password attempts to 0 and allow them to attempt to login again.
If you started your DoltLab instance, but the UI is displaying an error, it is often the case that doltlabapi
has crashed during startup. To verify this, check the logs for doltlabapi
by running:
If you see an error similar to could not open database connection
, this means that doltlabapi
was unable to connect to the application database doltlabdb
.
To troubleshoot this issue, check that the doltlabdb
container is running by running:
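For example (container names may carry a compose project prefix):

```bash
docker ps | grep doltlabdb
```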
If you do not see the doltlabdb
container running, then it too has crashed on startup. Investigate the logs for the doltlabdb
container by running:
If the doltlabdb
container is running, it is likely the case that the current database connection credentials used by doltlabapi
are not the same as those used by the doltlabdb
container.
It is important to remember that the first time a DoltLab instance is started, the doltlabdb
container is initialized with admin_password
and dolthubapi_password
values from the installer_config.yaml
file. These values are persisted to disk, and will be expected for all successful database connections.
If you have changed the database passwords in installer_config.yaml
after the initial startup, you will need to either follow the live password-change steps described above, or:
Delete the persistent Docker volume storing the doltlabdb
container's data. To do this, make sure your DoltLab instance is stopped, then run:
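A sketch; the exact volume name is an assumption, so list the volumes on your host and remove the one holding doltlabdb's data.

```bash
./stop.sh
docker volume ls | grep doltlabdb
docker volume rm doltlab_doltlabdb-dolt-data   # substitute the name printed above
```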
After deleting the volume, start your DoltLab instance again. DoltLab will recreate the volume and it will be initialized with the new credentials you provided in installer_config.yaml
.
DoltLab's source code is currently closed, but you can file DoltLab issues in the public issues repository. Release notes are published for each release.
DoltLab currently persists all data to local disk using Docker volumes. To back up or restore DoltLab's data, we recommend the following steps, which follow Docker's official volume backup and restore procedure.
Next, start DoltLab using the start.sh
script. After the script completes, stop DoltLab once more with ./stop.sh
. Doing this will recreate the required containers so that their volumes can be updated with the commands below.
The quickest way to do this is with the ./doltlabdb/shell-db.sh
script generated by the installer:
Then, use the ./doltlabdb/dolt_db_cli.sh
script generated by the installer. This script will open a container shell with access to the Dolt server volumes.
Doing this removes the existing Dolt server database. Now, use dolt backup restore to restore the database from the backup located at /backups/dolthubapi/2023/06/01
:
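A sketch, run from the container shell opened by ./doltlabdb/dolt_db_cli.sh at /var/lib/dolt; the argument order assumed here is dolt backup restore <url> <database>.

```bash
dolt backup restore file:///backups/dolthubapi/2023/06/01 dolthubapi
```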
If you need to connect to a DoltLab team member, the best way to do so is on Discord, in the #doltlab channel.
DoltLab is composed of multiple services running in a single Docker network via Docker Compose. Docker writes the logs of each DoltLab service to an internal location. Logs for a particular service can be viewed using the docker logs <container name>
command. For example, to view the logs of the doltlabapi
service, run:
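For example (confirm the exact container name with docker ps; Docker Compose typically prefixes it with the project name):

```bash
docker logs doltlab-doltlabapi-1
```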
Third-party errors from the golang SDK in DoltLab Enterprise.
The event name of database errors, i.e. dbr.begin.error
.
To authenticate a Dolt client against a DoltLab remote, use the --auth-endpoint
, --login-url
, and --insecure
arguments with the dolt login command.
--auth-endpoint should point at the doltlabremoteapi service running on port 50051.
--login-url should point at the DoltLab instance's credentials page.
--insecure, a boolean flag, should be used if DoltLab is not running with TLS.
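A sketch, assuming DoltLab is reachable at doltlab.example.com, is not running with TLS, and serves its credentials page at /settings/credentials (the hostname and page path are assumptions).

```bash
dolt login --insecure \
  --auth-endpoint doltlab.example.com:50051 \
  --login-url http://doltlab.example.com/settings/credentials
```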
To authenticate without using the dolt login
command, first run the command, which will output a new public key:
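Assuming the command referred to here is dolt creds new, which generates a new credential key pair and prints its public key:

```bash
dolt creds new
```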
Copy the generated public key and run the command:
Service metrics for DoltLab's Remote API, doltlabremoteapi, and DoltLab's Main API, doltlabapi, are published on port 7770.
You can view the doltlabremoteapi
service metrics for our DoltLab demo instance here, and you can view the doltlabapi
service metrics here .
We recommend monitoring DoltLab with cAdvisor, which will expose container resource and performance metrics to Prometheus. Before running cAdvisor
, open port 8080
on your DoltLab host as well. cAdvisor
will display DoltLab's running containers via a web UI on :8080/docker
and will publish Prometheus metrics for DoltLab's container at :8080/metrics
by default.
To only permit whitelisted emails to create accounts on your DoltLab instance, edit ./installer_config.yaml
to disable automatic whitelisting of all users:
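A sketch; the exact field name is an assumption inferred from the --whitelist-all-users installer flag mentioned below.

```yaml
whitelist_all_users: false
```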
Save these changes, then rerun the installer to regenerate DoltLab assets that will require explicit whitelisting of new user accounts.
Alternatively, run the installer with --whitelist-all-users=false
, which disables automatically whitelisting all users.
Next, once you've restarted your DoltLab instance with the regenerated assets, to whitelist an email for account creation in your instance, you will need to insert their email address into the email_whitelist_elements
table.
To do this, run the script generated by the installer, called ./doltlabdb/shell-db.sh
.
Next, stop your DoltLab instance if it is running. Then, supply the --doltlabdb-host=<external db host>
and --doltlabdb-port=<external db port>
arguments to the installer.
Metrics can be disabled by setting the corresponding field of the ./installer_config.yaml
:
Save these changes, then rerun the installer to regenerate DoltLab assets that disable usage metrics.
Alternatively, to disable first-party metrics using command line arguments, run the installer with --disable-usage-metrics=true
.
It's common practice to provision a domain name to use for a DoltLab instance. To do so, secure a domain name and map it to the stable, public IP address of the DoltLab host. Then, supply the domain name as the value to the --host
argument used with the installer.
First, we should attach a stable IP to this host. To do this in AWS, we can provision an Elastic IP address (EIP).
Next, we should associate the EIP with our DoltLab host by following the AWS documentation for associating an EIP with an instance. Once this is done, the DoltLab host should be reachable by the EIP.
Finally, we can provision a domain name for the DoltLab host through a registrar such as AWS Route 53. After registering the new domain name, we need to create an A
record that's attached to the EIP of the DoltLab host. To do so, follow the steps for creating records outlined in your DNS provider's documentation.
Save these changes, then rerun the installer to regenerate DoltLab assets that use your new domain name.
Alternatively, if you want to use command line flags instead, rerun the installer with --host=yourdomain.com
.
DoltLab can be configured to use a Hosted Dolt instance as its application database. This allows DoltLab administrators to use the feature-rich SQL workbench Hosted Dolt provides to interact with their DoltLab database.
To begin, you'll need to create a Hosted Dolt deployment that your DoltLab instance will connect to. We've created a guide for how to create your first Hosted Dolt deployment, but briefly, you'll need to create an account on the Hosted Dolt website and then click the "Create Deployment" button.
If you want the ability to connect to the deployment with Dolt credentials, check the box "Enable Dolt Credentials". And finally, if you want to use the SQL workbench feature for this hosted instance (which we recommend) you should also check the box "Create database users for the SQL Workbench".
Save these changes and rerun the installer to regenerate DoltLab assets that will use your hosted instance as DoltLab's application database.
Alternatively, rerun the installer with --doltlabdb-host
referring to the host name of the Hosted Dolt instance, --doltlabdb-port
referring to the port, and --doltlabdb-tls-skip-verify=true
if you'd prefer to use command line arguments.
Start DoltLab using the ./start.sh
script generated by the installer. Once DoltLab is running successfully against my-doltlab-db-1
, you can create a database on DoltLab, for example called test-db
, and you will see live changes to the database reflected in the Hosted Dolt workbench:
Save these changes and rerun the installer to regenerate DoltLab assets that will limit the Job concurrency of your instance.
Alternatively, you can use the following command line arguments with the installer to prevent Job resource exhaustion:
The following section describes how to set up an AWS Network Load Balancer (NLB) for a DoltLab instance.
First, set up DoltLab on an EC2 host in the same VPC where your NLB will run.
Next, in AWS, create a target group for each DoltLab port that the NLB will forward requests to. These ports are: 80
, 100
, 4321
, and 50051
.
Restart your DoltLab instance supplying this DNS name as the --host
to the installer, and your DoltLab instance will now be ready to run exclusively through the NLB.
Starting with DoltLab >= v2.3.3
, DoltLab's service images are available and do not need to be pulled from their AWS ECR repositories. This is useful for when egress traffic is restricted on the DoltLab host.
Next, edit installer_config.yaml with your desired configuration, or reuse a previous installer_config.yaml
. If you are reusing an older config file, just update the version
field at the top of the file to match the DoltLab version you're installing. Also, ensure any older DoltLab running on the host is shutdown by running the ./stop.sh
script.
Once your config file is updated, run the installer binary to generate DoltLab's static assets.
You will see tar files for each service DoltLab depends on. Load each of these tar files into Docker by using the docker load command.
As of recent DoltLab versions, if a user of your DoltLab instance has forgotten their password, the DoltLab admin must reset that user's password on their behalf.
Connect to the doltlabdb database.
For all other issues not covered in this section, please reach out to our support team on Discord.