Adrian Mouat
Docker Security
Using Containers Safely in Production
Docker Security
by Adrian Mouat
Copyright © 2015 O’Reilly Media. All rights reserved.
Printed in the United States of America.
Published by O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472.
O’Reilly books may be purchased for educational, business, or sales promotional use. Online editions are also available for most titles (http://safaribooksonline.com). For more information, contact our corporate/institutional sales department: 800-998-9938 or corporate@oreilly.com.
August 2015: First Edition
Revision History for the First Edition
2015-08-17: First Release
2016-01-29: Second Release
See http://oreilly.com/catalog/errata.csp?isbn=9781491936610 for release details.
The O’Reilly logo is a registered trademark of O’Reilly Media, Inc. Docker Security, the cover image, and related trade dress are trademarks of O’Reilly Media, Inc. While the publisher and the author have used good faith efforts to ensure that the information and instructions contained in this work are accurate, the publisher and the author disclaim all responsibility for errors or omissions, including without limitation responsibility for damages resulting from the use of or reliance on this work. Use of the information and instructions contained in this work is at your own risk. If any code samples or other technology this work contains or describes is subject to open source licenses or the intellectual property rights of others, it is your responsibility to ensure that your use thereof complies with such licenses and/or rights.
Table of Contents
Foreword
1. Security and Limiting Containers
Things to Worry About
Defense in Depth
Segregate Containers by Host
Applying Updates
Image Provenance
Security Tips
Run a Hardened Kernel
Linux Security Modules
Auditing
Incident Response
Conclusion
Foreword

Docker’s introduction of the standardized image format has fueled an explosion of interest in the use of containers in the enterprise. Containers simplify the distribution of software and allow greater sharing of resources on a computer system. But as you pack more applications onto a system, the risk of an individual application having a vulnerability leading to a breakout increases.

Containers, as opposed to virtual machines, currently share the same host kernel. This kernel is a single point of failure. A flaw in the host kernel could allow a process within a container to break out and take over the system. Docker security is about limiting and controlling the attack surface on the kernel. Docker security takes advantage of security measures provided by the host operating system. It relies on Defense in Depth, using multiple security measures to control what the processes within the container are able to do. As Docker/containers evolve, security measures will continue to be added.

Administrators of container systems have a lot of responsibility to continue to use the common sense security measures that they have learned on Linux and UNIX systems over the years. They should not just rely on whether the “containers actually contain.”
• Only run container images from trusted parties.
• Container applications should drop privileges or run without privileges whenever possible.
• Make sure the kernel is always updated with the latest security fixes; the security of the kernel is critical.
• Make sure you have support teams watching for security flaws in the kernel.
• Use a good quality supported host system for running the containers, with regular security updates.
• Do not disable security features of the host operating system.
• Examine your container images for security flaws and make sure the provider fixes them in a timely manner.
—Dan Walsh, Consulting Engineer, Red Hat
CHAPTER 1
Security and Limiting Containers

To use Docker safely, you need to be aware of the potential security issues and the major tools and techniques for securing container-based systems. This report considers security mainly from the viewpoint of running Docker in production, but most of the advice is equally applicable to development. Even with security, it is important to keep the development and production environments similar in order to avoid the issues around moving code between environments that Docker was intended to solve.

Reading online posts and news items1 about Docker can give you the impression that Docker is inherently insecure and not ready for production use. While you certainly need to be aware of issues related to using containers safely, containers, if used properly, can provide a more secure and efficient system than using virtual machines (VMs) or bare metal alone.

This report begins by exploring some of the issues surrounding the security of container-based systems that you should be thinking about when using containers.

1. I strongly recommend Dan Walsh's series of posts at opensource.com.
The guidance and advice in this report is based on my opinion. I am not a security researcher, nor am I responsible for any major public-facing system. That being said, I am confident that any system that follows the guidance in this report will be in a better security situation than the majority of systems out there. The advice in this report does not form a complete solution and should be used only to inform the development of your own security procedures and policy.
Things to Worry About
So what sorts of security issues should you be thinking about in a container-based environment? The following list is not comprehensive, but should give you food for thought:
Kernel exploits
Unlike in a VM, the kernel is shared among all containers and the host, magnifying the importance of any vulnerabilities present in the kernel. Should a container cause a kernel panic, it will take down the whole host. In VMs, the situation is much better: an attacker would have to route an attack through both the VM kernel and the hypervisor before being able to touch the host kernel.

Denial-of-service attacks
All containers share kernel resources. If one container can monopolize access to certain resources—including memory and more esoteric resources such as user IDs (UIDs)—it can starve out other containers on the host, resulting in a denial-of-service (DoS), whereby legitimate users are unable to access part or all of the system.
Container breakouts
An attacker who gains access to a container should not be able to gain access to other containers or the host. By default, users are not namespaced, so any process that breaks out of the container will have the same privileges on the host as it did in the container; if you were root in the container, you will be root on the host.2 This also means that you need to worry about potential privilege escalation attacks—whereby a user gains elevated privileges such as those of the root user, often through a bug in application code that needs to run with extra privileges. Given that container technology is still in its infancy, you should organize your security around the assumption that container breakouts are unlikely, but possible.

2. It is possible to turn on user namespacing, which will map the root user in a container to a high-numbered user on the host. We will discuss this feature and its drawbacks later.
Poisoned images
How do you know that the images you are using are safe, haven't been tampered with, and come from where they claim to come from? If an attacker can trick you into running his image, both the host and your data are at risk. Similarly, you want to be sure that the images you are running are up-to-date and do not contain versions of software with known vulnerabilities.

Compromising secrets
When a container accesses a database or service, it will likely require a secret, such as an API key or username and password. An attacker who can get access to this secret will also have access to the service. This report doesn't cover how to address this, but see the Deployment chapter of Using Docker (O'Reilly, 2015) for how to handle secrets in Docker.
Containers and Namespacing
In a much-cited article, Dan Walsh of Red Hat wrote, "Containers Do Not Contain." By this, he primarily meant that not all resources that a container has access to are namespaced. Resources that are namespaced are mapped to a separate value on the host; for example, PID 1 inside a container is not PID 1 on the host or in any other container. By contrast, resources that are not namespaced are the same on the host and in containers.
Resources that are not namespaced include the following:
UIDs (by default)
If a user is root inside a container and breaks out of the container, that user will be root on the host. An initial version of user namespacing is included in Docker 1.10, but is not enabled by default.

The kernel keyring
If your application or a dependent application uses the kernel keyring for handling cryptographic keys or something similar, it's very important to be aware of this. Keys are separated by UID, meaning any container running with a user of the same UID will have access to the same keys.

The kernel itself and any kernel modules
If a container loads a kernel module (which requires extra privileges), the module will be available across all containers and the host. This includes the Linux Security Modules discussed later.

Devices
Including disk drives, sound-cards, and graphics processing units (GPUs).

The system time
Changing the time inside a container changes the system time for the host and all other containers. This is possible only in containers that have been given the SYS_TIME capability, which is not granted by default.
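By contrast, a resource that is namespaced, such as process IDs, looks different from inside and outside a container. The following is a minimal sketch of this; the container name and the sleep command are only illustrative:

# The container's main process is PID 1 inside its own PID namespace
$ docker run -d --name pid-demo debian sleep 1000

# docker top shows the same process as the host sees it, with an ordinary high PID
$ docker top pid-demo

# Clean up the demo container
$ docker rm -f pid-demo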
The simple fact is that both Docker and the underlying Linux kernel features it relies on are still young and nowhere near as battle-hardened as the equivalent VM technology. For the time being at least, do not consider containers to offer the same level of security guarantees as VMs.3

3. An interesting argument exists about whether containers will ever be as secure as VMs. VM proponents argue that the lack of a hypervisor and the need to share kernel resources mean that containers will always be less secure. Container proponents argue that VMs are more vulnerable because of their greater attack surface, pointing to the large amounts of complicated and privileged code in VMs required for emulating esoteric hardware (as an example, see the recent VENOM vulnerability that exploited code in floppy drive emulation).

4. The concept of least privilege was first articulated as "Every program and every privileged user of the system should operate using the least amount of privilege necessary to complete the job," by Jerome Saltzer in "Protection and the Control of Information Sharing in Multics." Recently, Diogo Mónica and Nathan McCauley from Docker have been championing the idea of "least-privilege microservices" based on Saltzer's principle, including at a recent DockerCon talk.
Defense in Depth
So what can you do? Assume vulnerability and build defense in depth. Consider the analogy of a castle, which has multiple layers of defense, often tailored to thwart various kinds of attacks. Typically, a castle has a moat, or exploits local geography, to control access routes to the castle. The walls are thick stone, designed to repel fire and cannon blasts. There are battlements for defenders and multiple levels of keeps inside the castle walls. Should an attacker get past one set of defenses, there will be another to face.

The defenses for your system should also consist of multiple layers. For example, your containers will most likely run in VMs so that if a container breakout occurs, another level of defense can prevent the attacker from getting to the host or other containers. Monitoring systems should be in place to alert admins in the case of unusual behavior. Firewalls should restrict network access to containers, limiting the external attack surface.
Least Privilege
Another important principle to adhere to is least privilege: each process and container should run with the minimum set of access rights and resources it needs to perform its function.4 The main benefit of this approach is that if one container is compromised, the attacker should still be severely limited in being able to perform actions that provide access to or exploit further data or resources.
In regards to least privilege, you can take many steps to reduce the capabilities of containers (a short example of combining these options follows the list):
• Ensure that processes in containers do not run as root, so that exploiting a vulnerability present in a process does not give the attacker root access.
• Run filesystems as read-only so that attackers cannot overwrite data or save malicious scripts to file.
• Cut down on the kernel calls that a container can make to reduce the potential attack surface.
• Limit the resources that a container can use to avoid DoS attacks whereby a compromised container or application consumes enough resources (such as memory or CPU) to bring the host to a halt.
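As a rough sketch of how several of these steps translate into docker run options (the image name here is only a placeholder), a container can be started as an unprivileged user, with a read-only filesystem, with all capabilities dropped, and with a memory limit:

# Run as UID 1000 rather than root, with a read-only root filesystem,
# all capabilities dropped, and a 512 MB memory limit to mitigate DoS
$ docker run -d -u 1000 --read-only --cap-drop ALL -m 512m myimage

Cutting down on the kernel calls a container can make is typically handled with a seccomp profile (in newer Docker versions) or via the Linux Security Modules discussed later.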
Docker Privileges = Root Privileges

This report focuses on the security of running containers, but it is important to point out that you also have to be careful about who you give access to the Docker daemon. Any user who can start and run Docker containers effectively has root access to the host. For example, consider that you can run the following:

$ docker run -v /:/homeroot -it debian bash

And you can now access any file or binary on the host machine.

If you run remote API access to your Docker daemon, be careful about how you secure it and who you give access to. If possible, restrict access to the local network.
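To make this concrete, here is a minimal sketch of what that bind mount allows; reading the host's shadow password file is just one example of data that would normally be off limits:

# Inside such a container you are root, so any host file is readable,
# including the host's password hashes
$ docker run -v /:/homeroot debian cat /homeroot/etc/shadow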
Segregate Containers by Host
If you have a multi-tenancy setup, running containers for multiple users (whether these are internal users in your organization or external customers), ensure that each user is placed on a separate Docker host, as shown in Figure 1-1. This is less efficient than sharing hosts between users and will result in a higher number of VMs and/or machines than reusing hosts, but is important for security. The main reason is to prevent container breakouts resulting in a user gaining access to another user's containers or data. If a container breakout occurs, the attacker will still be on a separate VM or machine and unable to easily access containers belonging to other users.
Figure 1-1. Segregating containers by host
Similarly, if you have containers that process or store sensitive information, keep them on a host separate from containers handling less-sensitive information and, in particular, away from containers running applications directly exposed to end users. For example, containers processing credit-card details should be kept separate from containers running the Node.js frontend.

Segregation and use of VMs can also provide added protection against DoS attacks; users won't be able to monopolize all the memory on the host and starve out other users if they are contained within their own VM.

In the short to medium term, the vast majority of container deployments will involve VMs. Although this isn't an ideal situation, it does mean you can combine the efficiency of containers with the security of VMs.
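At the time of writing, one straightforward way to provision a separate Docker host per tenant is Docker Machine; the driver and names below are only placeholders for whatever infrastructure you use:

# Create one Docker host (VM) per tenant
$ docker-machine create --driver virtualbox tenant-a
$ docker-machine create --driver virtualbox tenant-b

# Point the Docker client at tenant A's host before running that tenant's containers
$ eval $(docker-machine env tenant-a)
$ docker run -d tenant-a/webapp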
5. A work-around is to docker save all the required images and load them into a fresh registry.
Applying Updates
The ability to quickly apply updates to a running system is critical to maintaining security, especially when vulnerabilities are disclosed in common utilities and frameworks.
The process of updating a containerized system roughly involves the following stages (a brief sketch of some of these commands follows the list):

1. Identify images that require updating. This includes both base images and any dependent images. See "Getting a List of Running Images" for how to do this.
2. Get or create an updated version of each base image and push it to your registry or download site.
3. Rebuild each dependent image, making sure the build does not reuse cached layers, and push these images as well.
4. On each Docker host, pull the updated images.
5. Restart the containers on each Docker host.
6. Once you've ascertained that everything is functioning correctly, remove the old images from the hosts. If you can, also remove them from your registry.
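As a rough sketch of steps 3 to 5, assuming an image called myorg/webapp and a container named webapp (both placeholders):

# Rebuild the dependent image without using cached layers, so the
# updated base image is actually picked up, and push it
$ docker build --no-cache -t myorg/webapp:v2 .
$ docker push myorg/webapp:v2

# On each Docker host, pull the new image and restart the container
$ docker pull myorg/webapp:v2
$ docker stop webapp && docker rm webapp
$ docker run -d --name webapp myorg/webapp:v2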
Some of these steps sound easier than they are. Identifying images that need updating may require some grunt work and shell fu. Restarting the containers assumes that you have in place some sort of support for rolling updates or are willing to tolerate downtime. At the time of writing, functionality to completely remove images from a registry and reclaim the disk space is still being worked on.5
If you use Docker Hub to build your images, note that you can set up repository links, which will kick off a build of your image when any linked image changes. By setting a link to the base image, your image will automatically get rebuilt if the base image changes.
Getting a List of Running Images
The following gets the image IDs for all running images:
$ docker inspect -f "{{.Image}}" $(docker ps -q)
42a3cf88f3f0cce2b4bfb2ed714eec5ee937525b4c7e0a0f70daff18c
41b730702607edf9b07c6098f0b704ff59c5d4361245e468c0d551f50
You can use a little more shell fu to get some more information:
$ docker images --no-trunc | grep \
    $(docker inspect -f "-e {{.Image}}" $(docker ps -q))
nginx     latest    42a3cf88f    2 weeks ago    132.8 MB
debian    latest    41b730702    2 weeks ago    125.1 MB
To get a list of all images and their base or intermediate images (use --no-trunc for full IDs):
$ docker inspect -f "{{.Image}}" $(docker ps -q) | \
xargs -L 1 docker history -q
And you can extend this again to get information on the images:
$ docker images | grep \
    $(docker inspect -f "{{.Image}}" $(docker ps -q) | \
    xargs -L 1 docker history -q | sed "s/^/\-e /")
nginx     latest    42a3cf88f3f0    2 weeks ago    132.8 MB
debian    latest    41b730702607    2 weeks ago    125.1 MB
If you want to get details on the intermediate images as well as named images, add the -a argument to the docker images command. Note that this command includes a significant gotcha: if your host doesn't have a tagged version of a base image, it won't show up in the list. For example, the official Redis image is based on debian:wheezy, but the base image will appear as <None> in docker images -a unless the host has separately and explicitly pulled the debian:wheezy image (and it is exactly the same version of that image).

6. This is similar to modern ideas of immutable infrastructure, whereby infrastructure—including bare metal, VMs, and containers—is never modified and is instead replaced when a change is required.
When you need to patch a vulnerability found in a third-party image, including the official images, you are dependent on that party providing a timely update. In the past, providers have been criticized for being slow to respond. In such a situation, you can either wait or prepare your own image. Assuming that you have access to the Dockerfile and source for the image, rolling your own image may be a simple and effective temporary solution.

This approach should be contrasted with the typical VM approach of using configuration management (CM) software such as Puppet, Chef, or Ansible. In the CM approach, VMs aren't re-created but are updated and patched as needed, either through SSH commands or an agent installed in the VM. This approach works, but means that separate VMs are often in different states and that significant complexity exists in tracking and updating the VMs. This is necessary to avoid the overhead of re-creating VMs and maintaining a master, or golden, image for the service. The CM approach can be taken with containers as well, but adds significant complexity for no benefit—the simpler golden image approach works well with containers because of the speed at which containers can be started and the ease of building and maintaining images.6
Trang 19Label Your Images
Identifying images and what they contain can be made
a lot easier by liberal use of labels when building
images This feature appeared in 1.6 and allows the
image creator to associate arbitrary key/value pairs
with an image This can be done in the Dockerfile:
FROM debian
LABEL version 1.0
LABEL description "Test image for labels"
You can take things further and add data such as the
Git hash that the code in the image was compiled
from, but this requires using some form of templating
tool to automatically update the value
Labels can also be added to a container at runtime:
$ docker run -d name label-test -l group=a \
This can be useful when you want to handle certain
events at runtime, such as dynamically allocating con‐
tainers to load-balancer groups
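For example, once containers carry a group label, other tooling can select them by it. A small sketch reusing the group=a label from above:

# List only containers labelled group=a
$ docker ps --filter "label=group=a"

# Read the labels back from a specific container
$ docker inspect -f "{{.Config.Labels}}" label-test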
At times, you will need to update the Docker daemon to gain access to new features, security patches, or bug fixes. This will force you to either migrate all containers to a new host or temporarily halt them while the update is applied. It is recommended that you subscribe to either the docker-user or docker-dev Google groups to receive notifications of important updates.
Avoid Unsupported Drivers
Despite its youth, Docker has already gone through several stages of development, and some features have been deprecated or are unmaintained. Relying on such features is a security risk, because they will not be receiving the same attention and updates as other parts of Docker. The same goes for drivers and extensions depended on by Docker.
Storage drivers are another major area of development and change. At the time of writing, Docker is moving away from AUFS as the preferred storage driver. The AUFS driver is being taken out of the kernel and no longer developed. Users of AUFS are encouraged to move to Overlay or one of the other drivers in the near future.
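If you are not sure which storage driver a given host is using, docker info will tell you; the exact value will of course vary from host to host:

$ docker info | grep "Storage Driver"
Storage Driver: overlay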
Image Provenance
To safely use images, you need to have guarantees about their provenance: where they came from and who created them. You need to be sure that you are getting exactly the same image that the original developer tested and that no one has tampered with it, either during storage or transit. If you can't verify this, the image may have become corrupted or, much worse, replaced with something malicious. Given the previously discussed security issues with Docker, this is a major concern; you should assume that a malicious image has full access to the host.
Provenance is far from a new problem in computing. The primary tool in establishing the provenance of software or data is the secure hash. A secure hash is something like a fingerprint for data—it is a (comparatively) small string that is unique to the given data. Any changes to the data will result in the hash changing. Several algorithms are available for calculating secure hashes, with varying degrees of complexity and guarantees of the uniqueness of the hash. The most common algorithms are SHA (which has several variants) and MD5 (which has fundamental problems and should be avoided). If you have a secure hash for some data and the data itself, you can recalculate the hash for the data and compare it. If the hashes match, you can be certain the data has not been corrupted or tampered with. However, one issue remains—why should you trust the hash? What's to stop an attacker from modifying both the data and the hash? The best answer to this is cryptographic signing and public/private key pairs.
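Before moving on to signing, here is a small illustration of the recalculate-and-compare step itself; the file names are hypothetical, and the publisher is assumed to have provided a SHA-256 checksum file alongside the download:

# Recompute the hash of the downloaded file to compare against the published value
$ sha256sum some-image.tar

# Or let sha256sum do the comparison using the publisher's checksum file
$ sha256sum -c some-image.tar.sha256
some-image.tar: OK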
Through cryptographic signing, you can verify the identity of the publisher of an artifact. If a publisher signs an artifact with their private key,7 any recipient of that artifact can verify it came from the publisher by checking the signature using the publisher's public key. Assuming the client has already obtained a copy of the publisher's key, and that publisher's key has not been compromised, you can be sure that the artifact genuinely came from the publisher and has not been tampered with.

7. A full discussion of public-key cryptography is fascinating but out of scope here. For more information see Applied Cryptography by Bruce Schneier.
8. A similar construct is used in protocols such as Bittorrent and Bitcoin and is known as a hash list or Merkle tree.
Secure hashes are known as digests in Docker parlance. A digest is a SHA256 hash of a filesystem layer or manifest, where a manifest is a metadata file describing the constituent parts of a Docker image. As the manifest contains a list of all the image's layers identified by digest,8 if you can verify that the manifest hasn't been tampered with, you can safely download and trust all the layers, even over untrustworthy channels (e.g., HTTP).
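Digests can also be used directly when pulling, which pins an image to exact, verifiable content rather than a mutable tag. A minimal sketch, with the digest left as a placeholder:

# Pull by digest instead of by tag (substitute a real, full sha256 digest)
$ docker pull debian@sha256:<digest>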
Docker Content Trust
Docker introduced content trust in 1.8. This is Docker's mechanism for allowing publishers9 to sign their content, completing the trusted distribution mechanism. When a user pulls an image from a repository, she receives a certificate that includes the publisher's public key, allowing her to verify that the image came from the publisher. When content trust is enabled, the Docker engine will only operate on images that have been signed and will refuse to run any images whose signatures or digests do not match.

You can see content trust in action by enabling it and trying to pull signed and unsigned images:
$ export DOCKER_CONTENT_TRUST=1
$ docker pull debian:wheezy
Pull (1 of 1): debian:wheezy@sha256:c584131da2ac1948aa3e66468a4424b6aea2f33a
sha256:c584131da2ac1948aa3e66468a4424b6aea2f33acba7cec0b631bd
4c8cbfd2973e: Pull complete
60c52dbe9d91: Pull complete
Digest: sha256:c584131da2ac1948aa3e66468a4424b6aea2f
Status: Downloaded newer image for debian@sha256:c5841
Tagging debian@sha256:c584131da2ac1948aa3e66468a4424b6aea2f33
$ docker pull amouat/identidock:unsigned
No trust data for unsigned
In Docker 1.8, content trust must be enabled by setting the environment variable DOCKER_CONTENT_TRUST=1. In later versions of Docker, this will become the default.
The official, signed, Debian image was pulled successfully. In contrast, Docker refused to pull the unsigned image amouat/identidock:unsigned.
So what about pushing signed images? It’s surprisingly easy:
$ docker push amouat/identidock:newest
The push refers to a repository [docker.io/amouat/identidock] (len: 1)
843e2bded498: Image already exists
newest: digest: sha256:1a0c4d72c5d52094fd246ec03d
Signing and pushing trust metadata
You are about to create a new root signing key passphrase. This passphrase
will be used to protect the most sensitive key in your signing system. Please
choose a long, complex passphrase and be careful to keep the password and the
key file itself secure and backed up. It is highly recommended that you use a
password manager to generate the passphrase and keep it safe. There will be no
way to recover this key. You can find the key in your config directory.
Enter passphrase for new offline key with id 70878f1:
Repeat passphrase for new offline key with id 70878f1:
Enter passphrase for new tagging key with id docker.io/amouat/identidock
Repeat passphrase for new tagging key with id docker.io/amouat/identidock
Finished initializing "docker.io/amouat/identidock"
Since this was the first push to the repository with content trust
enabled, Docker has created a new root signing key and a tagging key.
The tagging key will be discussed later. Note the importance of keeping the root key safe and secure. Life becomes very difficult if you lose this; all users of your repositories will be unable to pull new images or update existing images without manually removing the old certificate.
Now the image can be downloaded using content trust:
$ docker rmi amouat/identidock:newest
Untagged: amouat/identidock:newest
$ docker pull amouat/identidock:newest
Pull (1 of 1): amouat/identidock:newest@sha256:1a0c4d72c
sha256:1a0c4d72c5d52094fd246ec03d6b6ac43836440796

Pulling an image from a repository for the first time is similar to connecting to a host via SSH for the first time; you have to trust that you are being given the correct credentials. Future pulls from that repository can be verified using the existing certificate.
Back Up Your Signing Keys!

Docker will encrypt all keys at rest and takes care to ensure private material is never written to disk. Due to the importance of the keys, it is recommended that they are backed up on two encrypted USB sticks kept in a secure location; a sketch of creating a TAR file of the keys for backup follows this note. Note that as the root key is only needed when creating or revoking keys, it can—and should—be stored offline when not in use.
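A minimal sketch of such a backup, assuming the default location Docker uses for private content trust keys (~/.docker/trust/private) and an archive name of your own choosing:

# Restrict permissions while creating the archive, then restore the default umask
$ umask 077
$ tar -zcvf private_keys_backup.tar.gz ~/.docker/trust/private
$ umask 022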
Back to the tagging key. A tagging key is generated for each repository owned by a publisher. The tagging key is signed by the root key, which allows it to be verified by any user with the publisher's certificate. The tagging key can be shared within an organization and used to sign any images for that repository. After generating the tagging key, the root key can and should be taken offline and stored securely. Should a tagging key become compromised, it is still possible to recover. By rotating the tagging key, the compromised key can be removed from the system. This process happens invisibly to the user and can be done proactively to protect against undetected key compromises.
Content trust also provides freshness guarantees to guard against replay attacks. A replay attack occurs when an artifact is replaced with a previously valid artifact. For example, an attacker may replace a binary with an older, known vulnerable version that was previously signed by the publisher. As the binary is correctly signed, the user can be tricked into running the vulnerable version of the binary. To avoid this, content trust makes use of timestamp keys associated with each repository. These keys are used to sign metadata associated with the repository. The metadata has a short expiration date that requires it to be frequently resigned by the timestamp key. By verifying that the metadata has not expired before downloading the image, the Docker client can be sure it is receiving an up-to-date (or fresh) image. The timestamp keys are managed by the Docker Hub and do not require any interaction from the publisher.
A repository can contain both signed and unsigned images. If you have content trust enabled and want to download an unsigned image, use the --disable-content-trust flag:

$ docker pull amouat/identidock:unsigned
No trust data for unsigned
$ docker pull --disable-content-trust \
    amouat/identidock:unsigned
If you want to learn more about content trust, see the official Docker documentation, as well as The Update Framework, which is the underlying specification used by content trust.
While this is a reasonably complex infrastructure with multiple sets of keys, Docker has worked hard to ensure it is still simple for end users. With content trust, Docker has developed a user-friendly, modern security framework providing provenance, freshness, and integrity guarantees.
Trang 25Content trust is currently enabled and working on the Docker Hub.
To set up content trust for a local registry, you will also need to configure and deploy a Notary server.
Notary
The Docker Notary project is a generic server-client framework for publishing and accessing content in a trustworthy and secure manner. Notary is based on The Update Framework specification, which provides a secure design for distributing and updating content.

Docker's content trust framework is essentially an integration of Notary with the Docker API. By running both a registry and a Notary server, organizations can provide trusted images to users. However, Notary is designed to be standalone and usable in a wide range of scenarios.
A major use case for Notary is to improve the security and trustworthiness of the common curl | sh approach, which is typified by the current Docker installation instructions:
$ curl -sSL https://get.docker.com/ | sh
If such a download is compromised either on the server or in transit, the attacker will be able to run arbitrary commands on the victim's computer. The use of HTTPS will stop the attacker from being able to modify data in transit, but they may still be able to prematurely end the download, thereby truncating the code in a potentially dangerous way. The equivalent example of using Notary looks something like this:
$ curl http://get.docker.com/ | notary verify docker.com/scripts v1 | sh
The call to notary compares a checksum for the script with the
checksum in Notary's trusted collection for docker.com. If it passes, you have verified that the script does indeed come from docker.com and has not been tampered with. If it fails, Notary will bail out, and no data will be passed to sh. What's also notable is that the script itself can be transferred over insecure channels—in this case, HTTP—without worry; if the script is altered in transit, the checksum will change and Notary will throw an error.