This is the mega repo for future SaaS code (both services and libraries), to enhance sharing
and code reuse. (See team discussion thread.)

Setup and other conventions

Since this repo contains a lot of Go code, the easiest approach is to clone it directly
under $GOPATH/src/github.com/docker/saas-mega. Because of how the Go workspace and vendored
dependencies work, you may run into issues with protobuf reflection if you put it (or
symlink it) elsewhere.
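
A minimal sketch of the clone step (assuming SSH access to GitHub; adjust the URL if you clone over HTTPS):

$ mkdir -p $GOPATH/src/github.com/docker
$ git clone git@github.com:docker/saas-mega.git $GOPATH/src/github.com/docker/saas-mega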

Directory layout

/services                   (all services)
    /<service-name>
/schema                     (common communication & RPC schema)
    /<protobuf>             (protobuf schema)
    /<gen-go>               (generated Go code)
    /<gen-py>               (generated Python code)

Commit Guidelines

As the saas-mega repository is used by multiple teams, we enforce commit style rules to help us
manage the repository.

Commit Message Format

Each commit message consists of a header and a body. The header has a special
format that includes one or more components and a summary:

[<component>,<component>] <summary>
<BLANK LINE>
<body>

Any line of the commit message must not be longer than 80 characters! This keeps messages
readable in Git tools and on GitHub.

Components

The services/components that the commit is targeting.

Examples:

  • accounts
  • repos
  • billing
  • tutum-app

Summary

The summary contains a succinct description of the change:

  • Capitalized, short (50 chars or less) summary
  • Use the imperative, present tense: "change" not "changed" nor "changes"
  • No dot (.) at the end

Body

Use the body to explain what and why vs. how.

Just as in the summary, use the imperative, present tense: "change" not "changed" nor "changes".
The body should include the motivation for the change and contrast this with previous behavior.

Wrap your commit messages at 80 chars.

The body is also the place to reference JIRA or GitHub issues.
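
Putting it all together, a complete example (the component names come from the list above; the ticket reference is made up):

[billing,accounts] Add shared invoice ID validation

Billing and accounts currently validate invoice IDs with slightly different
rules, which leads to inconsistent behavior between the two services. Move the
validation into the shared schema package so both services apply the same
rules.

Refs SAAS-123.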

Reference:

http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html
http://chris.beams.io/posts/git-commit/

Pull Requests

Use GitHub's "Squash and Merge" option when merging a PR, so that the change goes in as a single commit.

Service convention

A service should follow this interface:

  1. It should use make as the build interface, simply because make is universally available.
    Build output should go into a build directory as much as possible, to facilitate cleaning.
    The Makefile should have at least the following targets (a minimal sketch follows this list):

    • test: Run tests, using whatever means is suitable for the service. It may choose to build
      an image and run tests in the image, or just run tests locally.
    • image: Build a Docker image.
  2. The built image should have a run executable/script, which is the entry point of the image.
    One good pattern is for the run command to be a script that prepares the command for the
    actual service from environment variables. The actual service executable should be explicit
    about what the possible configuration options are.
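
A minimal sketch of a conforming Makefile (the image name, build directory, and test command are illustrative, not prescribed):

BUILD_DIR := build
IMAGE     := docker/example-service

.PHONY: test image clean

test:
	go test ./...                # or build an image and run the tests inside it

image:
	docker build -t $(IMAGE) .

clean:
	rm -rf $(BUILD_DIR)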

Service Versioning

We use bumpversion to version services.

Service versioning is performed against the entire repository. That is, when one cuts a new version, all services are versioned appropriately. Instead of using a <major>.<minor>.<revision> convention, we simply rely on a <major>.0.0 convention, where major is an integer.

To version services, run the following command:

bumpversion major

This will create a commit on master and a v<new-version> tag for the version. You will need to push the tag and the master commit.
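
For example, if the current version is 41.0.0 (the numbers here are illustrative), the flow is:

$ bumpversion major            # creates the bump commit and the v42.0.0 tag
$ git push origin master
$ git push origin v42.0.0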

Third-party dependency (Golang)

We enable GO15VENDOREXPERIMENT, and put the vendor directory at the root of
the repo. By default, all services share the same third-party dependencies, at
the same version. (Exceptions can be made.)

Adding a dependency

We use govendor as our Go dependency management tool. It outputs a manifest file at vendor/vendor.json.

To add a new dependency:

  1. Download the package directly into vendor. Note that this also flattens the package's dependencies and downloads them into vendor as well.
    govendor fetch github.com/a/b

    -- or --

  2. Copy all packages in $GOPATH that are referenced but have yet to be vendored.
    govendor add +e
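
Either way, commit the updated manifest and the vendored source together (the component in the commit message below is just an example):

$ git add vendor/
$ git commit -m "[vendor] Add github.com/a/b"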

Services

  • accounts - Formerly docker-io
    • need access to docker/accounts
  • billing - formerly billing-api; provides the soon-to-be-EOLed billing V3 API
    • need access to docker/billing
  • billing-api - The billing service API server.
  • cloud-gateway - directory for storing data needed for running cloud-gateway
    • need access to docker/cloud-gateway
  • dhe-license-server - directory for storing data needed for running the license server
    • need access to dockerhubenterprise/license-server
  • dnsmasq - Service for providing local DNS (docker/dnsmasq)
    • need access to docker/dnsmasq
  • dynamodb_local - service for providing a local instance of dynamodb
    • need access to which repo is this?
  • elasticsearch - directory for persisting Elasticsearch data
  • haproxy - service for providing local dns routing
    • need access to docker/haproxy
  • hub-garant - service for providing auth tokens
    • need access to docker/hub-garant
  • hub-gateway - directory for storing data needed for running hub-gateway
    • need access to docker/hub-gateway
  • hydra - service for aggregating metrics
    • need access to docker/hydra
  • kafka2s3 - @bc what's this for?
    • need access to @bc which repo is this?
  • kafka_console_consumer - what's this for?
    • need access to @bc which repo is this?
  • pgbouncer - PG Bouncer for services requiring postgres connection pooling
    • need access to docker/repos-pgbouncer and docker/accounts-pgbouncer
  • postgres - Local instance of a postgres database (Docker-afied)
    • need access to docker/postgres
  • registry2kafka - what's this for?
    • need access to docker/registry2kafka
  • registry_v2 - directory for storing data for running registry v2
    • moved to docker/hub-distribution
  • repos - Formerly docker-index
    • need access to docker/repos
  • tutum-app - Docker cloud (formerly tutum)
    • need access to tutum/tutum-app
  • tutum-live - service for providing something for tutum
    • need access to tutum/tutum-live

Additional repos you'll need access to:

  • docker/nautilus
  • docker/cloud-ui
  • docker/distribution

Developing locally

Since there are many services, it is generally advised to configure the Docker daemon with 4 GB of RAM and 4 cores in order to run all of these services on a laptop.

Bootstrap your environment

To bootstrap your environment (create databases, load initial data, etc.), run the bootstrap.py script.

./bootstrap.py defaults to showing help; usage of the script is self-documented there.

./bootstrap.py all brings up all services in docker-compose.yml, pulling all images in the process.

Since running all services is pretty heavy, you'll often only need to run a subset of them.

Hub UI development

For example, if your end goal is to get just https://hub.dev.docker.com up and running, it's sufficient to run:

$ docker-compose up -d --force-recreate dns {repos,billing,accounts}_{api,worker} id hub2 hubgw
$ ./bootstrap.py repos billing accounts dns

  • In docker-compose.yml, notice how the hub2 service (also known as the Hub UI) uses the docker/hub-local:latest image. Every time you make changes to the hub2-demo repo that should be reflected at hub.dev.docker.com, run make local from within hub2-demo to update docker/hub-local:latest, then run docker-compose up -d hub2 from within saas-mega. The iteration loop is sketched below.
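
The iteration loop, assuming hub2-demo and saas-mega are checked out side by side:

$ cd hub2-demo && make local                       # rebuild docker/hub-local:latest
$ cd ../saas-mega && docker-compose up -d hub2     # recreate the Hub UI container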

Troubleshooting Hub UI

  • A 5xx in the browser means the container is not up. You can verify that by looking at https://haproxy.dev.docker.com
    • If the container is in 'Exit' status, try running docker-compose logs CONTAINER_NAME
  • For debugging issues in hub2-demo, check docker-compose logs hub2

Refer to the script and docker-compose.yml to figure out how to bring up the subset of services you need.

Signup and Login

If you're not interested in testing the sign-up flow, feel free to skip the following steps and just log in
with the default admin user created as part of ./bootstrap.py accounts above.

Once your environment has been successfully bootstrapped, it's time to sign up and log in.

  1. Go to https://hub.dev.docker.com or https://cloud.dev.docker.com and click Sign Up.
  2. You will receive an email; click the activation link.
  3. You will be redirected back to either Hub or Cloud and prompted to log in.

Next, from the command line, run:

$ docker login registry.dev.docker.com   # for the V2 registry.
$ docker login index.dev.docker.com      # for the V1 registry, deprecated

You will be prompted for your username, password, and email.

Create and Push a Repo

If you're not interested in testing registry pulls and pushes, feel free to skip these steps.

Now let's create a repo in Docker Cloud; let's call it alpine.

Once the repo has been created in Docker Cloud, create the repo on your host machine and push it to the dev registry.

$ mkdir alpine && cd alpine
$ touch Dockerfile

Make the contents of your Dockerfile:

FROM alpine:latest

Now let's build and push our image:

$ docker build -t registry.dev.docker.com/username/alpine:latest .   # where username is the username you signed up with
$ docker push registry.dev.docker.com/username/alpine:latest
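
Optionally, verify the push by pulling the image back from the dev registry:

$ docker pull registry.dev.docker.com/username/alpine:latest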

Deploying

Prerequisites

  1. Make sure that ucpdeploy is installed and set up. Obtain CLI-based access by downloading a client certificate bundle.

  2. VPN:
    See the company wiki on how to get VPN access to the staging or production
    environment -or- talk to the Infra Team.

  3. pass:
    Note that you may want to use gpg-agent. (If not set up, follow the Pass
    Runbook.) Make sure your .password-store repo is up to date.

  4. Access to UCP: Services run on UCP clusters (one for staging and one for
    production). Each cluster is accessible over its corresponding VPN. Production is
    https://ucp.us-east-1.aws.dckr.io and staging is
    https://ucp.stage-us-east-1.aws.dckr.io. Please ask Infra for an account on both
    of these clusters and read
    https://docker.atlassian.net/wiki/spaces/CNT/pages/135614556/Managing+Services+on+UCP
    to understand how the services are set up. The README of ucpdeploy in step 1 also has information about the services on UCP.

Steps

  1. Do a build
# You may need to be in hubboss's virtualenv to get `bumpversion`.
$ cd saas-mega
$ git checkout master
$ bumpversion major
$ git push origin master
$ git push origin <the-new-tag-name>

Wait for autobuild completion on: https://cloud.docker.com/app/docker/repository/docker/docker/your_repo_name/tags

  2. Log in to Docker Hub as the dockersaasdeploy user, whose credentials are located in LastPass. The command below should do the trick (install the LastPass CLI if you haven't already done so):

    $ lpass show --password dockersaasdeploy | docker login -u dockersaasdeploy --password-stdin
    

    Make sure to docker login as your own user again after deployment.

  3. Activate the virtualenv for deployment and source your shell env to speak to UCP staging or production.

    $ source ~/deploy/bin/activate
    

    Navigate to the directory where your UCP user bundle is stored and source the env.sh script:

    $ eval $(<env.sh)
    
  4. Deploy to staging/production

Update the image tags in the respective docker-compose-prod.yml or docker-compose.stage.yml files.

Note: Some services, like repos and accounts, run a predeploy step by issuing a docker run command against UCP - if so, update predeploy.sh. This command unfortunately doesn't use credentials from docker login and requires credentials in ~/.docker/config.json. Please see https://docs.docker.com/swarm/swarm-api/#registry-authentication for how to set this up.

$ ucpdeploy <service name>

Troubleshooting

Domain not reachable

If you're running Docker for Mac or Docker for Windows and attempts to access *.dev.docker.com fail, it's likely because the IP address of the VM running the Docker engine has changed (this can happen when rebooting your computer).

Run: ./bootstrap.py dns

This will remove the existing DNS container, rewrite your DNS file with the new IP address, and create a new DNS container using that address.

Services too slow

If the services are running but not responding and seem extremely sluggish, it's probably because they don't have enough resources. Try increasing the memory and CPU allocated to Docker; you should notice the difference.
