diff --git a/components/engine/.mailmap b/components/engine/.mailmap index 83c18fa29c..1f38e55e28 100644 --- a/components/engine/.mailmap +++ b/components/engine/.mailmap @@ -2,7 +2,7 @@ -Guillaume J. Charmes creack +Guillaume J. Charmes @@ -16,4 +16,6 @@ Tim Terhorst Andy Smith + +Thatcher Peskens diff --git a/components/engine/AUTHORS b/components/engine/AUTHORS index e8979aac6b..e7c6834cf4 100644 --- a/components/engine/AUTHORS +++ b/components/engine/AUTHORS @@ -1,24 +1,34 @@ +Al Tobey +Alexey Shamrin Andrea Luzzardi Andy Rothfusz Andy Smith Antony Messerli +Barry Allard +Brandon Liu Brian McCallister +Bruno Bigras Caleb Spare Charles Hooper Daniel Mizyrycki Daniel Robinson +Daniel Von Fange Dominik Honnef Don Spaulding +Dr Nic Williams +Evan Wies ezbercih Flavio Castelli Francisco Souza Frederick F. Kautz IV Guillaume J. Charmes +Harley Laue Hunter Blanks Jeff Lindsay Jeremy Grosser Joffrey F John Costa +Jonas Pfenniger Jonathan Rudenberg Julien Barbier Jérôme Petazzoni @@ -27,8 +37,11 @@ Kevin J. Lynagh Louis Opter Maxim Treskin Mikhail Sobolev +Nate Jones Nelson Chen Niall O'Higgins +odk- +Paul Bowsher Paul Hammond Piotr Bogdan Robert Obryk @@ -38,6 +51,8 @@ Silas Sewell Solomon Hykes Sridhar Ratnakumar Thatcher Peskens +Thomas Bikeev +Tianon Gravi Tim Terhorst Troy Howard unclejack diff --git a/components/engine/README.md b/components/engine/README.md index b22c731691..c83feeae58 100644 --- a/components/engine/README.md +++ b/components/engine/README.md @@ -35,7 +35,7 @@ for containerization, including Linux with [openvz](http://openvz.org), [vserver Docker builds on top of these low-level primitives to offer developers a portable format and runtime environment that solves all 4 problems. Docker containers are small (and their transfer can be optimized with layers), they have basically zero memory and cpu overhead, -the are completely portable and are designed from the ground up with an application-centric design. +they are completely portable and are designed from the ground up with an application-centric design. The best part: because docker operates at the OS level, it can still be run inside a VM! @@ -46,7 +46,7 @@ Docker does not require that you buy into a particular programming language, fra Is your application a unix process? Does it use files, tcp connections, environment variables, standard unix streams and command-line arguments as inputs and outputs? Then docker can run it. -Can your application's build be expressed a sequence of such commands? Then docker can build it. +Can your application's build be expressed as a sequence of such commands? Then docker can build it. ## Escape dependency hell @@ -70,21 +70,21 @@ Docker solves dependency hell by giving the developer a simple way to express *a and streamline the process of assembling them. If this makes you think of [XKCD 927](http://xkcd.com/927/), don't worry. Docker doesn't *replace* your favorite packaging systems. It simply orchestrates their use in a simple and repeatable way. How does it do that? With layers. -Docker defines a build as running a sequence unix commands, one after the other, in the same container. Build commands modify the contents of the container +Docker defines a build as running a sequence of unix commands, one after the other, in the same container. Build commands modify the contents of the container (usually by installing new files on the filesystem), the next command modifies it some more, etc. 
Since each build command inherits the result of the previous commands, the *order* in which the commands are executed expresses *dependencies*. Here's a typical docker build process: ```bash -from ubuntu:12.10 -run apt-get update -run apt-get install python -run apt-get install python-pip -run pip install django -run apt-get install curl -run curl http://github.com/shykes/helloflask/helloflask/master.tar.gz | tar -zxv -run cd master && pip install -r requirements.txt +from ubuntu:12.10 +run apt-get update +run DEBIAN_FRONTEND=noninteractive apt-get install -q -y python +run DEBIAN_FRONTEND=noninteractive apt-get install -q -y python-pip +run pip install django +run DEBIAN_FRONTEND=noninteractive apt-get install -q -y curl +run curl -L https://github.com/shykes/helloflask/archive/master.tar.gz | tar -xzv +run cd helloflask-master && pip install -r requirements.txt ``` Note that Docker doesn't care *how* dependencies are built - as long as they can be built by running a unix command in a container. @@ -293,7 +293,7 @@ a format that is self-describing and portable, so that any compliant runtime can The spec for Standard Containers is currently a work in progress, but it is very straightforward. It mostly defines 1) an image format, 2) a set of standard operations, and 3) an execution environment. -A great analogy for this is the shipping container. Just like Standard Containers are a fundamental unit of software delivery, shipping containers (http://bricks.argz.com/ins/7823-1/12) are a fundamental unit of physical delivery. +A great analogy for this is the shipping container. Just like how Standard Containers are a fundamental unit of software delivery, shipping containers (http://bricks.argz.com/ins/7823-1/12) are a fundamental unit of physical delivery. ### 1. STANDARD OPERATIONS @@ -321,7 +321,7 @@ Similarly, before Standard Containers, by the time a software component ran in p ### 5. INDUSTRIAL-GRADE DELIVERY -There are 17 million shipping containers in existence, packed with every physical good imaginable. Every single one of them can be loaded on the same boats, by the same cranes, in the same facilities, and sent anywhere in the World with incredible efficiency. It is embarrassing to think that a 30 ton shipment of coffee can safely travel half-way across the World in *less time* than it takes a software team to deliver its code from one datacenter to another sitting 10 miles away. +There are 17 million shipping containers in existence, packed with every physical good imaginable. Every single one of them can be loaded onto the same boats, by the same cranes, in the same facilities, and sent anywhere in the World with incredible efficiency. It is embarrassing to think that a 30 ton shipment of coffee can safely travel half-way across the World in *less time* than it takes a software team to deliver its code from one datacenter to another sitting 10 miles away. With Standard Containers we can put an end to that embarrassment, by making INDUSTRIAL-GRADE DELIVERY of software a reality. diff --git a/components/engine/Vagrantfile b/components/engine/Vagrantfile index 9ec0c83182..3d568266af 100644 --- a/components/engine/Vagrantfile +++ b/components/engine/Vagrantfile @@ -3,6 +3,8 @@ BOX_NAME = ENV['BOX_NAME'] || "ubuntu" BOX_URI = ENV['BOX_URI'] || "http://files.vagrantup.com/precise64.box" +AWS_REGION = ENV['AWS_REGION'] || "us-east-1" +AWS_AMI = ENV['AWS_AMI'] || "ami-d0f89fb9" Vagrant::Config.run do |config| # Setup virtual machine box. This VM configuration code is always executed. 
@@ -49,8 +51,8 @@ Vagrant::VERSION >= "1.1.0" and Vagrant.configure("2") do |config| aws.keypair_name = ENV["AWS_KEYPAIR_NAME"] override.ssh.private_key_path = ENV["AWS_SSH_PRIVKEY"] override.ssh.username = "ubuntu" - aws.region = "us-east-1" - aws.ami = "ami-d0f89fb9" + aws.region = AWS_REGION + aws.ami = AWS_AMI aws.instance_type = "t1.micro" end diff --git a/components/engine/api.go b/components/engine/api.go index 2198a0963a..29103fac10 100644 --- a/components/engine/api.go +++ b/components/engine/api.go @@ -4,8 +4,8 @@ import ( "encoding/json" "fmt" "github.com/dotcloud/docker/auth" + "github.com/dotcloud/docker/utils" "github.com/gorilla/mux" - "github.com/shin-/cookiejar" "io" "log" "net/http" @@ -34,6 +34,8 @@ func parseForm(r *http.Request) error { func httpError(w http.ResponseWriter, err error) { if strings.HasPrefix(err.Error(), "No such") { http.Error(w, err.Error(), http.StatusNotFound) + } else if strings.HasPrefix(err.Error(), "Bad parameter") { + http.Error(w, err.Error(), http.StatusBadRequest) } else { http.Error(w, err.Error(), http.StatusInternalServerError) } @@ -44,12 +46,18 @@ func writeJson(w http.ResponseWriter, b []byte) { w.Write(b) } -func getAuth(srv *Server, w http.ResponseWriter, r *http.Request, vars map[string]string) error { - config := &auth.AuthConfig{ - Username: srv.runtime.authConfig.Username, - Email: srv.runtime.authConfig.Email, +func getBoolParam(value string) (bool, error) { + if value == "1" || strings.ToLower(value) == "true" { + return true, nil } - b, err := json.Marshal(config) + if value == "" || value == "0" || strings.ToLower(value) == "false" { + return false, nil + } + return false, fmt.Errorf("Bad parameter") +} + +func getAuth(srv *Server, w http.ResponseWriter, r *http.Request, vars map[string]string) error { + b, err := json.Marshal(srv.registry.GetAuthConfig()) if err != nil { return err } @@ -63,18 +71,17 @@ func postAuth(srv *Server, w http.ResponseWriter, r *http.Request, vars map[stri return err } - if config.Username == srv.runtime.authConfig.Username { - config.Password = srv.runtime.authConfig.Password + if config.Username == srv.registry.GetAuthConfig().Username { + config.Password = srv.registry.GetAuthConfig().Password } newAuthConfig := auth.NewAuthConfig(config.Username, config.Password, config.Email, srv.runtime.root) status, err := auth.Login(newAuthConfig) if err != nil { return err - } else { - srv.runtime.graph.getHttpClient().Jar = cookiejar.NewCookieJar() - srv.runtime.authConfig = newAuthConfig } + srv.registry.ResetClient(newAuthConfig) + if status != "" { b, err := json.Marshal(&ApiAuth{Status: status}) if err != nil { @@ -116,8 +123,8 @@ func getContainersExport(srv *Server, w http.ResponseWriter, r *http.Request, va name := vars["name"] if err := srv.ContainerExport(name, w); err != nil { - Debugf("%s", err.Error()) - //return nil, err + utils.Debugf("%s", err.Error()) + return err } return nil } @@ -127,11 +134,13 @@ func getImagesJson(srv *Server, w http.ResponseWriter, r *http.Request, vars map return err } - all := r.Form.Get("all") == "1" + all, err := getBoolParam(r.Form.Get("all")) + if err != nil { + return err + } filter := r.Form.Get("filter") - only_ids := r.Form.Get("only_ids") == "1" - outs, err := srv.Images(all, only_ids, filter) + outs, err := srv.Images(all, filter) if err != nil { return err } @@ -198,9 +207,10 @@ func getContainersPs(srv *Server, w http.ResponseWriter, r *http.Request, vars m if err := parseForm(r); err != nil { return err } - all := r.Form.Get("all") == "1" - trunc_cmd 
:= r.Form.Get("trunc_cmd") != "0" - only_ids := r.Form.Get("only_ids") == "1" + all, err := getBoolParam(r.Form.Get("all")) + if err != nil { + return err + } since := r.Form.Get("since") before := r.Form.Get("before") n, err := strconv.Atoi(r.Form.Get("limit")) @@ -208,7 +218,7 @@ func getContainersPs(srv *Server, w http.ResponseWriter, r *http.Request, vars m n = -1 } - outs := srv.Containers(all, trunc_cmd, only_ids, n, since, before) + outs := srv.Containers(all, n, since, before) b, err := json.Marshal(outs) if err != nil { return err @@ -227,7 +237,10 @@ func postImagesTag(srv *Server, w http.ResponseWriter, r *http.Request, vars map return fmt.Errorf("Missing parameter") } name := vars["name"] - force := r.Form.Get("force") == "1" + force, err := getBoolParam(r.Form.Get("force")) + if err != nil { + return err + } if err := srv.ContainerTag(name, repo, tag, force); err != nil { return err @@ -242,7 +255,7 @@ func postCommit(srv *Server, w http.ResponseWriter, r *http.Request, vars map[st } config := &Config{} if err := json.NewDecoder(r.Body).Decode(config); err != nil { - Debugf("%s", err.Error()) + utils.Debugf("%s", err.Error()) } repo := r.Form.Get("repo") tag := r.Form.Get("tag") @@ -270,23 +283,17 @@ func postImagesCreate(srv *Server, w http.ResponseWriter, r *http.Request, vars src := r.Form.Get("fromSrc") image := r.Form.Get("fromImage") - repo := r.Form.Get("repo") tag := r.Form.Get("tag") + repo := r.Form.Get("repo") - in, out, err := hijackServer(w) - if err != nil { - return err - } - defer in.Close() - fmt.Fprintf(out, "HTTP/1.1 200 OK\r\nContent-Type: application/vnd.docker.raw-stream\r\n\r\n") if image != "" { //pull registry := r.Form.Get("registry") - if err := srv.ImagePull(image, tag, registry, out); err != nil { - fmt.Fprintf(out, "Error: %s\n", err) + if err := srv.ImagePull(image, tag, registry, w); err != nil { + return err } } else { //import - if err := srv.ImageImport(src, repo, tag, in, out); err != nil { - fmt.Fprintf(out, "Error: %s\n", err) + if err := srv.ImageImport(src, repo, tag, r.Body, w); err != nil { + return err } } return nil @@ -322,15 +329,9 @@ func postImagesInsert(srv *Server, w http.ResponseWriter, r *http.Request, vars } name := vars["name"] - in, out, err := hijackServer(w) - if err != nil { + if err := srv.ImageInsert(name, url, path, w); err != nil { return err } - defer in.Close() - fmt.Fprintf(out, "HTTP/1.1 200 OK\r\nContent-Type: application/vnd.docker.raw-stream\r\n\r\n") - if err := srv.ImageInsert(name, url, path, out); err != nil { - fmt.Fprintf(out, "Error: %s\n", err) - } return nil } @@ -338,7 +339,6 @@ func postImagesPush(srv *Server, w http.ResponseWriter, r *http.Request, vars ma if err := parseForm(r); err != nil { return err } - registry := r.Form.Get("registry") if vars == nil { @@ -346,28 +346,9 @@ func postImagesPush(srv *Server, w http.ResponseWriter, r *http.Request, vars ma } name := vars["name"] - in, out, err := hijackServer(w) - if err != nil { + if err := srv.ImagePush(name, registry, w); err != nil { return err } - defer in.Close() - fmt.Fprintf(out, "HTTP/1.1 200 OK\r\nContent-Type: application/vnd.docker.raw-stream\r\n\r\n") - if err := srv.ImagePush(name, registry, out); err != nil { - fmt.Fprintln(out, "Error: %s\n", err) - } - return nil -} - -func postBuild(srv *Server, w http.ResponseWriter, r *http.Request, vars map[string]string) error { - in, out, err := hijackServer(w) - if err != nil { - return err - } - defer in.Close() - fmt.Fprintf(out, "HTTP/1.1 200 OK\r\nContent-Type: 
application/vnd.docker.raw-stream\r\n\r\n") - if err := srv.ImageCreateFromFile(in, out); err != nil { - fmt.Fprintln(out, "Error: %s\n", err) - } return nil } @@ -428,7 +409,10 @@ func deleteContainers(srv *Server, w http.ResponseWriter, r *http.Request, vars return fmt.Errorf("Missing parameter") } name := vars["name"] - removeVolume := r.Form.Get("v") == "1" + removeVolume, err := getBoolParam(r.Form.Get("v")) + if err != nil { + return err + } if err := srv.ContainerDestroy(name, removeVolume); err != nil { return err @@ -503,11 +487,27 @@ func postContainersAttach(srv *Server, w http.ResponseWriter, r *http.Request, v if err := parseForm(r); err != nil { return err } - logs := r.Form.Get("logs") == "1" - stream := r.Form.Get("stream") == "1" - stdin := r.Form.Get("stdin") == "1" - stdout := r.Form.Get("stdout") == "1" - stderr := r.Form.Get("stderr") == "1" + logs, err := getBoolParam(r.Form.Get("logs")) + if err != nil { + return err + } + stream, err := getBoolParam(r.Form.Get("stream")) + if err != nil { + return err + } + stdin, err := getBoolParam(r.Form.Get("stdin")) + if err != nil { + return err + } + stdout, err := getBoolParam(r.Form.Get("stdout")) + if err != nil { + return err + } + stderr, err := getBoolParam(r.Form.Get("stderr")) + if err != nil { + return err + } + if vars == nil { return fmt.Errorf("Missing parameter") } @@ -562,6 +562,29 @@ func getImagesByName(srv *Server, w http.ResponseWriter, r *http.Request, vars m return nil } +func postImagesGetCache(srv *Server, w http.ResponseWriter, r *http.Request, vars map[string]string) error { + apiConfig := &ApiImageConfig{} + if err := json.NewDecoder(r.Body).Decode(apiConfig); err != nil { + return err + } + + image, err := srv.ImageGetCached(apiConfig.Id, apiConfig.Config) + if err != nil { + return err + } + if image == nil { + w.WriteHeader(http.StatusNotFound) + return nil + } + apiId := &ApiId{Id: image.Id} + b, err := json.Marshal(apiId) + if err != nil { + return err + } + writeJson(w, b) + return nil +} + func ListenAndServe(addr string, srv *Server, logging bool) error { r := mux.NewRouter() log.Printf("Listening for HTTP on %s\n", addr) @@ -584,11 +607,11 @@ func ListenAndServe(addr string, srv *Server, logging bool) error { "POST": { "/auth": postAuth, "/commit": postCommit, - "/build": postBuild, "/images/create": postImagesCreate, "/images/{name:.*}/insert": postImagesInsert, "/images/{name:.*}/push": postImagesPush, "/images/{name:.*}/tag": postImagesTag, + "/images/getCache": postImagesGetCache, "/containers/create": postContainersCreate, "/containers/{name:.*}/kill": postContainersKill, "/containers/{name:.*}/restart": postContainersRestart, @@ -605,20 +628,20 @@ func ListenAndServe(addr string, srv *Server, logging bool) error { for method, routes := range m { for route, fct := range routes { - Debugf("Registering %s, %s", method, route) + utils.Debugf("Registering %s, %s", method, route) // NOTE: scope issue, make sure the variables are local and won't be changed localRoute := route localMethod := method localFct := fct r.Path(localRoute).Methods(localMethod).HandlerFunc(func(w http.ResponseWriter, r *http.Request) { - Debugf("Calling %s %s", localMethod, localRoute) + utils.Debugf("Calling %s %s", localMethod, localRoute) if logging { log.Println(r.Method, r.RequestURI) } if strings.Contains(r.Header.Get("User-Agent"), "Docker-Client/") { userAgent := strings.Split(r.Header.Get("User-Agent"), "/") if len(userAgent) == 2 && userAgent[1] != VERSION { - Debugf("Warning: client and server don't have the 
same version (client: %s, server: %s)", userAgent[1], VERSION) + utils.Debugf("Warning: client and server don't have the same version (client: %s, server: %s)", userAgent[1], VERSION) } } if err := localFct(srv, w, r, mux.Vars(r)); err != nil { diff --git a/components/engine/api_params.go b/components/engine/api_params.go index c4942b50e1..234d10fadd 100644 --- a/components/engine/api_params.go +++ b/components/engine/api_params.go @@ -68,3 +68,8 @@ type ApiWait struct { type ApiAuth struct { Status string } + +type ApiImageConfig struct { + Id string + *Config +} diff --git a/components/engine/api_test.go b/components/engine/api_test.go index 2128f3ef35..dd685ffece 100644 --- a/components/engine/api_test.go +++ b/components/engine/api_test.go @@ -6,6 +6,8 @@ import ( "bytes" "encoding/json" "github.com/dotcloud/docker/auth" + "github.com/dotcloud/docker/registry" + "github.com/dotcloud/docker/utils" "io" "net" "net/http" @@ -23,7 +25,10 @@ func TestGetAuth(t *testing.T) { } defer nuke(runtime) - srv := &Server{runtime: runtime} + srv := &Server{ + runtime: runtime, + registry: registry.NewRegistry(runtime.root), + } r := httptest.NewRecorder() @@ -46,13 +51,14 @@ func TestGetAuth(t *testing.T) { if err := postAuth(srv, r, req, nil); err != nil { t.Fatal(err) } + if r.Code != http.StatusOK && r.Code != 0 { t.Fatalf("%d OK or 0 expected, received %d\n", http.StatusOK, r.Code) } - if runtime.authConfig.Username != authConfig.Username || - runtime.authConfig.Password != authConfig.Password || - runtime.authConfig.Email != authConfig.Email { + newAuthConfig := srv.registry.GetAuthConfig() + if newAuthConfig.Username != authConfig.Username || + newAuthConfig.Email != authConfig.Email { t.Fatalf("The auth configuration hasn't been set correctly") } } @@ -115,8 +121,8 @@ func TestGetImagesJson(t *testing.T) { srv := &Server{runtime: runtime} - // only_ids=0&all=0 - req, err := http.NewRequest("GET", "/images/json?only_ids=0&all=0", nil) + // all=0 + req, err := http.NewRequest("GET", "/images/json?all=0", nil) if err != nil { t.Fatal(err) } @@ -142,8 +148,8 @@ func TestGetImagesJson(t *testing.T) { r2 := httptest.NewRecorder() - // only_ids=1&all=1 - req2, err := http.NewRequest("GET", "/images/json?only_ids=1&all=1", nil) + // all=1 + req2, err := http.NewRequest("GET", "/images/json?all=true", nil) if err != nil { t.Fatal(err) } @@ -161,12 +167,8 @@ func TestGetImagesJson(t *testing.T) { t.Errorf("Excepted 1 image, %d found", len(images2)) } - if images2[0].Repository != "" { - t.Errorf("Excepted no image Repository, %s found", images2[0].Repository) - } - - if images2[0].Id != GetTestImage(runtime).ShortId() { - t.Errorf("Retrieved image Id differs, expected %s, received %s", GetTestImage(runtime).ShortId(), images2[0].Id) + if images2[0].Id != GetTestImage(runtime).Id { + t.Errorf("Retrieved image Id differs, expected %s, received %s", GetTestImage(runtime).Id, images2[0].Id) } r3 := httptest.NewRecorder() @@ -189,6 +191,24 @@ func TestGetImagesJson(t *testing.T) { if len(images3) != 0 { t.Errorf("Excepted 1 image, %d found", len(images3)) } + + r4 := httptest.NewRecorder() + + // all=foobar + req4, err := http.NewRequest("GET", "/images/json?all=foobar", nil) + if err != nil { + t.Fatal(err) + } + + err = getImagesJson(srv, r4, req4, nil) + if err == nil { + t.Fatalf("Error expected, received none") + } + + httpError(r4, err) + if r4.Code != http.StatusBadRequest { + t.Fatalf("%d Bad Request expected, received %d\n", http.StatusBadRequest, r4.Code) + } } func TestGetImagesViz(t *testing.T) { 
@@ -226,7 +246,10 @@ func TestGetImagesSearch(t *testing.T) { } defer nuke(runtime) - srv := &Server{runtime: runtime} + srv := &Server{ + runtime: runtime, + registry: registry.NewRegistry(runtime.root), + } r := httptest.NewRecorder() @@ -329,8 +352,8 @@ func TestGetContainersPs(t *testing.T) { if len(containers) != 1 { t.Fatalf("Excepted %d container, %d found", 1, len(containers)) } - if containers[0].Id != container.ShortId() { - t.Fatalf("Container ID mismatch. Expected: %s, received: %s\n", container.ShortId(), containers[0].Id) + if containers[0].Id != container.Id { + t.Fatalf("Container ID mismatch. Expected: %s, received: %s\n", container.Id, containers[0].Id) } } @@ -480,13 +503,16 @@ func TestPostAuth(t *testing.T) { } defer nuke(runtime) - srv := &Server{runtime: runtime} + srv := &Server{ + runtime: runtime, + registry: registry.NewRegistry(runtime.root), + } authConfigOrig := &auth.AuthConfig{ Username: "utest", Email: "utest@yopmail.com", } - runtime.authConfig = authConfigOrig + srv.registry.ResetClient(authConfigOrig) r := httptest.NewRecorder() if err := getAuth(srv, r, nil, nil); err != nil { @@ -552,56 +578,6 @@ func TestPostCommit(t *testing.T) { } } -func TestPostBuild(t *testing.T) { - runtime, err := newTestRuntime() - if err != nil { - t.Fatal(err) - } - defer nuke(runtime) - - srv := &Server{runtime: runtime} - - stdin, stdinPipe := io.Pipe() - stdout, stdoutPipe := io.Pipe() - - c1 := make(chan struct{}) - go func() { - defer close(c1) - r := &hijackTester{ - ResponseRecorder: httptest.NewRecorder(), - in: stdin, - out: stdoutPipe, - } - - if err := postBuild(srv, r, nil, nil); err != nil { - t.Fatal(err) - } - }() - - // Acknowledge hijack - setTimeout(t, "hijack acknowledge timed out", 2*time.Second, func() { - stdout.Read([]byte{}) - stdout.Read(make([]byte, 4096)) - }) - - setTimeout(t, "read/write assertion timed out", 2*time.Second, func() { - if err := assertPipe("from docker-ut\n", "FROM docker-ut", stdout, stdinPipe, 15); err != nil { - t.Fatal(err) - } - }) - - // Close pipes (client disconnects) - if err := closeWrap(stdin, stdinPipe, stdout, stdoutPipe); err != nil { - t.Fatal(err) - } - - // Wait for build to finish, the client disconnected, therefore, Build finished his job - setTimeout(t, "Waiting for CmdBuild timed out", 2*time.Second, func() { - <-c1 - }) - -} - func TestPostImagesCreate(t *testing.T) { // FIXME: Use the staging in order to perform tests @@ -668,10 +644,82 @@ func TestPostImagesCreate(t *testing.T) { // }) } -// func TestPostImagesInsert(t *testing.T) { -// //FIXME: Implement this test (or remove this endpoint) -// t.Log("Test not implemented") -// } +func TestPostImagesInsert(t *testing.T) { + // runtime, err := newTestRuntime() + // if err != nil { + // t.Fatal(err) + // } + // defer nuke(runtime) + + // srv := &Server{runtime: runtime} + + // stdin, stdinPipe := io.Pipe() + // stdout, stdoutPipe := io.Pipe() + + // // Attach to it + // c1 := make(chan struct{}) + // go func() { + // defer close(c1) + // r := &hijackTester{ + // ResponseRecorder: httptest.NewRecorder(), + // in: stdin, + // out: stdoutPipe, + // } + + // req, err := http.NewRequest("POST", "/images/"+unitTestImageName+"/insert?path=%2Ftest&url=https%3A%2F%2Fraw.github.com%2Fdotcloud%2Fdocker%2Fmaster%2FREADME.md", bytes.NewReader([]byte{})) + // if err != nil { + // t.Fatal(err) + // } + // if err := postContainersCreate(srv, r, req, nil); err != nil { + // t.Fatal(err) + // } + // }() + + // // Acknowledge hijack + // setTimeout(t, "hijack acknowledge timed 
out", 5*time.Second, func() { + // stdout.Read([]byte{}) + // stdout.Read(make([]byte, 4096)) + // }) + + // id := "" + // setTimeout(t, "Waiting for imagesInsert output", 10*time.Second, func() { + // for { + // reader := bufio.NewReader(stdout) + // id, err = reader.ReadString('\n') + // if err != nil { + // t.Fatal(err) + // } + // } + // }) + + // // Close pipes (client disconnects) + // if err := closeWrap(stdin, stdinPipe, stdout, stdoutPipe); err != nil { + // t.Fatal(err) + // } + + // // Wait for attach to finish, the client disconnected, therefore, Attach finished his job + // setTimeout(t, "Waiting for CmdAttach timed out", 2*time.Second, func() { + // <-c1 + // }) + + // img, err := srv.runtime.repositories.LookupImage(id) + // if err != nil { + // t.Fatalf("New image %s expected but not found", id) + // } + + // layer, err := img.layer() + // if err != nil { + // t.Fatal(err) + // } + + // if _, err := os.Stat(path.Join(layer, "test")); err != nil { + // t.Fatalf("The test file has not been found") + // } + + // if err := srv.runtime.graph.Delete(img.Id); err != nil { + // t.Fatal(err) + // } +} func TestPostImagesPush(t *testing.T) { //FIXME: Use staging in order to perform tests @@ -815,7 +863,7 @@ func TestPostContainersCreate(t *testing.T) { if _, err := os.Stat(path.Join(container.rwPath(), "test")); err != nil { if os.IsNotExist(err) { - Debugf("Err: %s", err) + utils.Debugf("Err: %s", err) t.Fatalf("The test file has not been created") } t.Fatal(err) diff --git a/components/engine/auth/auth.go b/components/engine/auth/auth.go index 5a5987ace8..2b99c95038 100644 --- a/components/engine/auth/auth.go +++ b/components/engine/auth/auth.go @@ -15,13 +15,13 @@ import ( const CONFIGFILE = ".dockercfg" // the registry server we want to login against -const INDEX_SERVER = "https://index.docker.io" +const INDEX_SERVER = "https://index.docker.io/v1" type AuthConfig struct { Username string `json:"username"` Password string `json:"password"` Email string `json:"email"` - rootPath string `json:-` + rootPath string } func NewAuthConfig(username, password, email, rootPath string) *AuthConfig { @@ -33,6 +33,13 @@ func NewAuthConfig(username, password, email, rootPath string) *AuthConfig { } } +func IndexServerAddress() string { + if os.Getenv("DOCKER_INDEX_URL") != "" { + return os.Getenv("DOCKER_INDEX_URL") + "/v1" + } + return INDEX_SERVER +} + // create a base64 encoded auth string to store in config func EncodeAuth(authConfig *AuthConfig) string { authStr := authConfig.Username + ":" + authConfig.Password @@ -119,7 +126,7 @@ func Login(authConfig *AuthConfig) (string, error) { // using `bytes.NewReader(jsonBody)` here causes the server to respond with a 411 status. 
b := strings.NewReader(string(jsonBody)) - req1, err := http.Post(INDEX_SERVER+"/v1/users/", "application/json; charset=utf-8", b) + req1, err := http.Post(IndexServerAddress()+"/users/", "application/json; charset=utf-8", b) if err != nil { return "", fmt.Errorf("Server Error: %s", err) } @@ -139,7 +146,7 @@ func Login(authConfig *AuthConfig) (string, error) { "Please check your e-mail for a confirmation link.") } else if reqStatusCode == 400 { if string(reqBody) == "\"Username or email already exists\"" { - req, err := http.NewRequest("GET", INDEX_SERVER+"/v1/users/", nil) + req, err := http.NewRequest("GET", IndexServerAddress()+"/users/", nil) req.SetBasicAuth(authConfig.Username, authConfig.Password) resp, err := client.Do(req) if err != nil { diff --git a/components/engine/auth/auth_test.go b/components/engine/auth/auth_test.go index ca584f9314..6c8d032cf7 100644 --- a/components/engine/auth/auth_test.go +++ b/components/engine/auth/auth_test.go @@ -1,6 +1,10 @@ package auth import ( + "crypto/rand" + "encoding/hex" + "os" + "strings" "testing" ) @@ -21,3 +25,49 @@ func TestEncodeAuth(t *testing.T) { t.Fatal("AuthString encoding isn't correct.") } } + +func TestLogin(t *testing.T) { + os.Setenv("DOCKER_INDEX_URL", "https://indexstaging-docker.dotcloud.com") + defer os.Setenv("DOCKER_INDEX_URL", "") + authConfig := NewAuthConfig("unittester", "surlautrerivejetattendrai", "noise+unittester@dotcloud.com", "/tmp") + status, err := Login(authConfig) + if err != nil { + t.Fatal(err) + } + if status != "Login Succeeded\n" { + t.Fatalf("Expected status \"Login Succeeded\", found \"%s\" instead", status) + } +} + +func TestCreateAccount(t *testing.T) { + os.Setenv("DOCKER_INDEX_URL", "https://indexstaging-docker.dotcloud.com") + defer os.Setenv("DOCKER_INDEX_URL", "") + tokenBuffer := make([]byte, 16) + _, err := rand.Read(tokenBuffer) + if err != nil { + t.Fatal(err) + } + token := hex.EncodeToString(tokenBuffer)[:12] + username := "ut" + token + authConfig := NewAuthConfig(username, "test42", "docker-ut+"+token+"@example.com", "/tmp") + status, err := Login(authConfig) + if err != nil { + t.Fatal(err) + } + expectedStatus := "Account created. Please use the confirmation link we sent" + + " to your e-mail to activate it.\n" + if status != expectedStatus { + t.Fatalf("Expected status: \"%s\", found \"%s\" instead.", expectedStatus, status) + } + + status, err = Login(authConfig) + if err == nil { + t.Fatalf("Expected error but found nil instead") + } + + expectedError := "Login: Account is not Active" + + if !strings.Contains(err.Error(), expectedError) { + t.Fatalf("Expected message \"%s\" but found \"%s\" instead", expectedError, err.Error()) + } +} diff --git a/components/engine/buildbot/README.rst b/components/engine/buildbot/README.rst deleted file mode 100644 index a52b9769ef..0000000000 --- a/components/engine/buildbot/README.rst +++ /dev/null @@ -1,20 +0,0 @@ -Buildbot -======== - -Buildbot is a continuous integration system designed to automate the -build/test cycle. By automatically rebuilding and testing the tree each time -something has changed, build problems are pinpointed quickly, before other -developers are inconvenienced by the failure. - -When running 'make hack' at the docker root directory, it spawns a virtual -machine in the background running a buildbot instance and adds a git -post-commit hook that automatically run docker tests for you. 
- -You can check your buildbot instance at http://192.168.33.21:8010/waterfall - - -Buildbot dependencies ---------------------- - -vagrant, virtualbox packages and python package requests - diff --git a/components/engine/buildbot/Vagrantfile b/components/engine/buildbot/Vagrantfile deleted file mode 100644 index ea027f0666..0000000000 --- a/components/engine/buildbot/Vagrantfile +++ /dev/null @@ -1,28 +0,0 @@ -# -*- mode: ruby -*- -# vi: set ft=ruby : - -$BUILDBOT_IP = '192.168.33.21' - -def v10(config) - config.vm.box = "quantal64_3.5.0-25" - config.vm.box_url = "http://get.docker.io/vbox/ubuntu/12.10/quantal64_3.5.0-25.box" - config.vm.share_folder 'v-data', '/data/docker', File.dirname(__FILE__) + '/..' - config.vm.network :hostonly, $BUILDBOT_IP - - # Ensure puppet is installed on the instance - config.vm.provision :shell, :inline => 'apt-get -qq update; apt-get install -y puppet' - - config.vm.provision :puppet do |puppet| - puppet.manifests_path = '.' - puppet.manifest_file = 'buildbot.pp' - puppet.options = ['--templatedir','.'] - end -end - -Vagrant::VERSION < '1.1.0' and Vagrant::Config.run do |config| - v10(config) -end - -Vagrant::VERSION >= '1.1.0' and Vagrant.configure('1') do |config| - v10(config) -end diff --git a/components/engine/buildbot/buildbot-cfg/buildbot-cfg.sh b/components/engine/buildbot/buildbot-cfg/buildbot-cfg.sh deleted file mode 100755 index 5e4e7432fd..0000000000 --- a/components/engine/buildbot/buildbot-cfg/buildbot-cfg.sh +++ /dev/null @@ -1,43 +0,0 @@ -#!/bin/bash - -# Auto setup of buildbot configuration. Package installation is being done -# on buildbot.pp -# Dependencies: buildbot, buildbot-slave, supervisor - -SLAVE_NAME='buildworker' -SLAVE_SOCKET='localhost:9989' -BUILDBOT_PWD='pass-docker' -USER='vagrant' -ROOT_PATH='/data/buildbot' -DOCKER_PATH='/data/docker' -BUILDBOT_CFG="$DOCKER_PATH/buildbot/buildbot-cfg" -IP=$(grep BUILDBOT_IP /data/docker/buildbot/Vagrantfile | awk -F "'" '{ print $2; }') - -function run { su $USER -c "$1"; } - -export PATH=/bin:sbin:/usr/bin:/usr/sbin:/usr/local/bin - -# Exit if buildbot has already been installed -[ -d "$ROOT_PATH" ] && exit 0 - -# Setup buildbot -run "mkdir -p ${ROOT_PATH}" -cd ${ROOT_PATH} -run "buildbot create-master master" -run "cp $BUILDBOT_CFG/master.cfg master" -run "sed -i 's/localhost/$IP/' master/master.cfg" -run "buildslave create-slave slave $SLAVE_SOCKET $SLAVE_NAME $BUILDBOT_PWD" - -# Allow buildbot subprocesses (docker tests) to properly run in containers, -# in particular with docker -u -run "sed -i 's/^umask = None/umask = 000/' ${ROOT_PATH}/slave/buildbot.tac" - -# Setup supervisor -cp $BUILDBOT_CFG/buildbot.conf /etc/supervisor/conf.d/buildbot.conf -sed -i "s/^chmod=0700.*0700./chmod=0770\nchown=root:$USER/" /etc/supervisor/supervisord.conf -kill -HUP `pgrep -f "/usr/bin/python /usr/bin/supervisord"` - -# Add git hook -cp $BUILDBOT_CFG/post-commit $DOCKER_PATH/.git/hooks -sed -i "s/localhost/$IP/" $DOCKER_PATH/.git/hooks/post-commit - diff --git a/components/engine/buildbot/buildbot.pp b/components/engine/buildbot/buildbot.pp deleted file mode 100644 index 8109cdc2a0..0000000000 --- a/components/engine/buildbot/buildbot.pp +++ /dev/null @@ -1,32 +0,0 @@ -node default { - $USER = 'vagrant' - $ROOT_PATH = '/data/buildbot' - $DOCKER_PATH = '/data/docker' - - exec {'apt_update': command => '/usr/bin/apt-get update' } - Package { require => Exec['apt_update'] } - group {'puppet': ensure => 'present'} - - # Install dependencies - Package { ensure => 'installed' } - package { 
['python-dev','python-pip','supervisor','lxc','bsdtar','git','golang']: } - - file{[ '/data' ]: - owner => $USER, group => $USER, ensure => 'directory' } - - file {'/var/tmp/requirements.txt': - content => template('requirements.txt') } - - exec {'requirements': - require => [ Package['python-dev'], Package['python-pip'], - File['/var/tmp/requirements.txt'] ], - cwd => '/var/tmp', - command => "/bin/sh -c '(/usr/bin/pip install -r requirements.txt; - rm /var/tmp/requirements.txt)'" } - - exec {'buildbot-cfg-sh': - require => [ Package['supervisor'], Exec['requirements']], - path => '/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin', - cwd => '/data', - command => "$DOCKER_PATH/buildbot/buildbot-cfg/buildbot-cfg.sh" } -} diff --git a/components/engine/builder.go b/components/engine/builder.go index cae8b55827..5f56f65d05 100644 --- a/components/engine/builder.go +++ b/components/engine/builder.go @@ -1,13 +1,9 @@ package docker import ( - "bufio" - "encoding/json" "fmt" - "io" "os" "path" - "strings" "time" ) @@ -15,6 +11,9 @@ type Builder struct { runtime *Runtime repositories *TagStore graph *Graph + + config *Config + image *Image } func NewBuilder(runtime *Runtime) *Builder { @@ -25,42 +24,6 @@ func NewBuilder(runtime *Runtime) *Builder { } } -func (builder *Builder) mergeConfig(userConf, imageConf *Config) { - if userConf.Hostname != "" { - userConf.Hostname = imageConf.Hostname - } - if userConf.User != "" { - userConf.User = imageConf.User - } - if userConf.Memory == 0 { - userConf.Memory = imageConf.Memory - } - if userConf.MemorySwap == 0 { - userConf.MemorySwap = imageConf.MemorySwap - } - if userConf.PortSpecs == nil || len(userConf.PortSpecs) == 0 { - userConf.PortSpecs = imageConf.PortSpecs - } - if !userConf.Tty { - userConf.Tty = userConf.Tty - } - if !userConf.OpenStdin { - userConf.OpenStdin = imageConf.OpenStdin - } - if !userConf.StdinOnce { - userConf.StdinOnce = imageConf.StdinOnce - } - if userConf.Env == nil || len(userConf.Env) == 0 { - userConf.Env = imageConf.Env - } - if userConf.Cmd == nil || len(userConf.Cmd) == 0 { - userConf.Cmd = imageConf.Cmd - } - if userConf.Dns == nil || len(userConf.Dns) == 0 { - userConf.Dns = imageConf.Dns - } -} - func (builder *Builder) Create(config *Config) (*Container, error) { // Lookup image img, err := builder.repositories.LookupImage(config.Image) @@ -69,7 +32,7 @@ func (builder *Builder) Create(config *Config) (*Container, error) { } if img.Config != nil { - builder.mergeConfig(config, img.Config) + MergeConfig(config, img.Config) } if config.Cmd == nil || len(config.Cmd) == 0 { @@ -153,311 +116,3 @@ func (builder *Builder) Commit(container *Container, repository, tag, comment, a } return img, nil } - -func (builder *Builder) clearTmp(containers, images map[string]struct{}) { - for c := range containers { - tmp := builder.runtime.Get(c) - builder.runtime.Destroy(tmp) - Debugf("Removing container %s", c) - } - for i := range images { - builder.runtime.graph.Delete(i) - Debugf("Removing image %s", i) - } -} - -func (builder *Builder) getCachedImage(image *Image, config *Config) (*Image, error) { - // Retrieve all images - images, err := builder.graph.All() - if err != nil { - return nil, err - } - - // Store the tree in a map of map (map[parentId][childId]) - imageMap := make(map[string]map[string]struct{}) - for _, img := range images { - if _, exists := imageMap[img.Parent]; !exists { - imageMap[img.Parent] = make(map[string]struct{}) - } - imageMap[img.Parent][img.Id] = struct{}{} - } - - // Loop on the children of the given 
image and check the config - for elem := range imageMap[image.Id] { - img, err := builder.graph.Get(elem) - if err != nil { - return nil, err - } - if CompareConfig(&img.ContainerConfig, config) { - return img, nil - } - } - return nil, nil -} - -func (builder *Builder) Build(dockerfile io.Reader, stdout io.Writer) (*Image, error) { - var ( - image, base *Image - config *Config - maintainer string - env map[string]string = make(map[string]string) - tmpContainers map[string]struct{} = make(map[string]struct{}) - tmpImages map[string]struct{} = make(map[string]struct{}) - ) - defer builder.clearTmp(tmpContainers, tmpImages) - - file := bufio.NewReader(dockerfile) - for { - line, err := file.ReadString('\n') - if err != nil { - if err == io.EOF { - break - } - return nil, err - } - line = strings.Replace(strings.TrimSpace(line), " ", " ", 1) - // Skip comments and empty line - if len(line) == 0 || line[0] == '#' { - continue - } - tmp := strings.SplitN(line, " ", 2) - if len(tmp) != 2 { - return nil, fmt.Errorf("Invalid Dockerfile format") - } - instruction := strings.Trim(tmp[0], " ") - arguments := strings.Trim(tmp[1], " ") - switch strings.ToLower(instruction) { - case "from": - fmt.Fprintf(stdout, "FROM %s\n", arguments) - image, err = builder.runtime.repositories.LookupImage(arguments) - if err != nil { - if builder.runtime.graph.IsNotExist(err) { - - var tag, remote string - if strings.Contains(arguments, ":") { - remoteParts := strings.Split(arguments, ":") - tag = remoteParts[1] - remote = remoteParts[0] - } else { - remote = arguments - } - - if err := builder.runtime.graph.PullRepository(stdout, remote, tag, builder.runtime.repositories, builder.runtime.authConfig); err != nil { - return nil, err - } - - image, err = builder.runtime.repositories.LookupImage(arguments) - if err != nil { - return nil, err - } - } else { - return nil, err - } - } - config = &Config{} - - break - case "maintainer": - fmt.Fprintf(stdout, "MAINTAINER %s\n", arguments) - maintainer = arguments - break - case "run": - fmt.Fprintf(stdout, "RUN %s\n", arguments) - if image == nil { - return nil, fmt.Errorf("Please provide a source image with `from` prior to run") - } - config, _, err := ParseRun([]string{image.Id, "/bin/sh", "-c", arguments}, builder.runtime.capabilities) - if err != nil { - return nil, err - } - - for key, value := range env { - config.Env = append(config.Env, fmt.Sprintf("%s=%s", key, value)) - } - - if cache, err := builder.getCachedImage(image, config); err != nil { - return nil, err - } else if cache != nil { - image = cache - fmt.Fprintf(stdout, "===> %s\n", image.ShortId()) - break - } - - Debugf("Env -----> %v ------ %v\n", config.Env, env) - - // Create the container and start it - c, err := builder.Create(config) - if err != nil { - return nil, err - } - - if os.Getenv("DEBUG") != "" { - out, _ := c.StdoutPipe() - err2, _ := c.StderrPipe() - go io.Copy(os.Stdout, out) - go io.Copy(os.Stdout, err2) - } - - if err := c.Start(); err != nil { - return nil, err - } - tmpContainers[c.Id] = struct{}{} - - // Wait for it to finish - if result := c.Wait(); result != 0 { - return nil, fmt.Errorf("!!! '%s' return non-zero exit code '%d'. 
Aborting.", arguments, result) - } - - // Commit the container - base, err = builder.Commit(c, "", "", "", maintainer, nil) - if err != nil { - return nil, err - } - tmpImages[base.Id] = struct{}{} - - fmt.Fprintf(stdout, "===> %s\n", base.ShortId()) - - // use the base as the new image - image = base - - break - case "env": - tmp := strings.SplitN(arguments, " ", 2) - if len(tmp) != 2 { - return nil, fmt.Errorf("Invalid ENV format") - } - key := strings.Trim(tmp[0], " ") - value := strings.Trim(tmp[1], " ") - fmt.Fprintf(stdout, "ENV %s %s\n", key, value) - env[key] = value - if image != nil { - fmt.Fprintf(stdout, "===> %s\n", image.ShortId()) - } else { - fmt.Fprintf(stdout, "===> \n") - } - break - case "cmd": - fmt.Fprintf(stdout, "CMD %s\n", arguments) - - // Create the container and start it - c, err := builder.Create(&Config{Image: image.Id, Cmd: []string{"", ""}}) - if err != nil { - return nil, err - } - if err := c.Start(); err != nil { - return nil, err - } - tmpContainers[c.Id] = struct{}{} - - cmd := []string{} - if err := json.Unmarshal([]byte(arguments), &cmd); err != nil { - return nil, err - } - config.Cmd = cmd - - // Commit the container - base, err = builder.Commit(c, "", "", "", maintainer, config) - if err != nil { - return nil, err - } - tmpImages[base.Id] = struct{}{} - - fmt.Fprintf(stdout, "===> %s\n", base.ShortId()) - image = base - break - case "expose": - ports := strings.Split(arguments, " ") - - fmt.Fprintf(stdout, "EXPOSE %v\n", ports) - if image == nil { - return nil, fmt.Errorf("Please provide a source image with `from` prior to copy") - } - - // Create the container and start it - c, err := builder.Create(&Config{Image: image.Id, Cmd: []string{"", ""}}) - if err != nil { - return nil, err - } - if err := c.Start(); err != nil { - return nil, err - } - tmpContainers[c.Id] = struct{}{} - - config.PortSpecs = append(ports, config.PortSpecs...) - - // Commit the container - base, err = builder.Commit(c, "", "", "", maintainer, config) - if err != nil { - return nil, err - } - tmpImages[base.Id] = struct{}{} - - fmt.Fprintf(stdout, "===> %s\n", base.ShortId()) - image = base - break - case "insert": - if image == nil { - return nil, fmt.Errorf("Please provide a source image with `from` prior to copy") - } - tmp = strings.SplitN(arguments, " ", 2) - if len(tmp) != 2 { - return nil, fmt.Errorf("Invalid INSERT format") - } - sourceUrl := strings.Trim(tmp[0], " ") - destPath := strings.Trim(tmp[1], " ") - fmt.Fprintf(stdout, "COPY %s to %s in %s\n", sourceUrl, destPath, base.ShortId()) - - file, err := Download(sourceUrl, stdout) - if err != nil { - return nil, err - } - defer file.Body.Close() - - config, _, err := ParseRun([]string{base.Id, "echo", "insert", sourceUrl, destPath}, builder.runtime.capabilities) - if err != nil { - return nil, err - } - c, err := builder.Create(config) - if err != nil { - return nil, err - } - - if err := c.Start(); err != nil { - return nil, err - } - - // Wait for echo to finish - if result := c.Wait(); result != 0 { - return nil, fmt.Errorf("!!! '%s' return non-zero exit code '%d'. 
Aborting.", arguments, result) - } - - if err := c.Inject(file.Body, destPath); err != nil { - return nil, err - } - - base, err = builder.Commit(c, "", "", "", maintainer, nil) - if err != nil { - return nil, err - } - fmt.Fprintf(stdout, "===> %s\n", base.ShortId()) - - image = base - - break - default: - fmt.Fprintf(stdout, "Skipping unknown instruction %s\n", strings.ToUpper(instruction)) - } - } - if image != nil { - // The build is successful, keep the temporary containers and images - for i := range tmpImages { - delete(tmpImages, i) - } - for i := range tmpContainers { - delete(tmpContainers, i) - } - fmt.Fprintf(stdout, "Build finished. image id: %s\n", image.ShortId()) - return image, nil - } - return nil, fmt.Errorf("An error occured during the build\n") -} diff --git a/components/engine/builder_client.go b/components/engine/builder_client.go new file mode 100644 index 0000000000..ceeab002c9 --- /dev/null +++ b/components/engine/builder_client.go @@ -0,0 +1,311 @@ +package docker + +import ( + "bufio" + "encoding/json" + "fmt" + "github.com/dotcloud/docker/utils" + "io" + "net/url" + "os" + "reflect" + "strings" +) + +type BuilderClient interface { + Build(io.Reader) (string, error) + CmdFrom(string) error + CmdRun(string) error +} + +type builderClient struct { + cli *DockerCli + + image string + maintainer string + config *Config + + tmpContainers map[string]struct{} + tmpImages map[string]struct{} + + needCommit bool +} + +func (b *builderClient) clearTmp(containers, images map[string]struct{}) { + for c := range containers { + if _, _, err := b.cli.call("DELETE", "/containers/"+c, nil); err != nil { + utils.Debugf("%s", err) + } + utils.Debugf("Removing container %s", c) + } + for i := range images { + if _, _, err := b.cli.call("DELETE", "/images/"+i, nil); err != nil { + utils.Debugf("%s", err) + } + utils.Debugf("Removing image %s", i) + } +} + +func (b *builderClient) CmdFrom(name string) error { + obj, statusCode, err := b.cli.call("GET", "/images/"+name+"/json", nil) + if statusCode == 404 { + + remote := name + var tag string + if strings.Contains(remote, ":") { + remoteParts := strings.Split(remote, ":") + tag = remoteParts[1] + remote = remoteParts[0] + } + var out io.Writer + if os.Getenv("DEBUG") != "" { + out = os.Stdout + } else { + out = &utils.NopWriter{} + } + if err := b.cli.stream("POST", "/images/create?fromImage="+remote+"&tag="+tag, nil, out); err != nil { + return err + } + obj, _, err = b.cli.call("GET", "/images/"+name+"/json", nil) + if err != nil { + return err + } + } + if err != nil { + return err + } + + img := &ApiId{} + if err := json.Unmarshal(obj, img); err != nil { + return err + } + b.image = img.Id + utils.Debugf("Using image %s", b.image) + return nil +} + +func (b *builderClient) CmdMaintainer(name string) error { + b.needCommit = true + b.maintainer = name + return nil +} + +func (b *builderClient) CmdRun(args string) error { + if b.image == "" { + return fmt.Errorf("Please provide a source image with `from` prior to run") + } + config, _, err := ParseRun([]string{b.image, "/bin/sh", "-c", args}, nil) + if err != nil { + return err + } + + cmd, env := b.config.Cmd, b.config.Env + b.config.Cmd = nil + MergeConfig(b.config, config) + + body, statusCode, err := b.cli.call("POST", "/images/getCache", &ApiImageConfig{Id: b.image, Config: b.config}) + if err != nil { + if statusCode != 404 { + return err + } + } + if statusCode != 404 { + apiId := &ApiId{} + if err := json.Unmarshal(body, apiId); err != nil { + return err + } + 
utils.Debugf("Use cached version") + b.image = apiId.Id + return nil + } + cid, err := b.run() + if err != nil { + return err + } + b.config.Cmd, b.config.Env = cmd, env + return b.commit(cid) +} + +func (b *builderClient) CmdEnv(args string) error { + b.needCommit = true + tmp := strings.SplitN(args, " ", 2) + if len(tmp) != 2 { + return fmt.Errorf("Invalid ENV format") + } + key := strings.Trim(tmp[0], " ") + value := strings.Trim(tmp[1], " ") + + for i, elem := range b.config.Env { + if strings.HasPrefix(elem, key+"=") { + b.config.Env[i] = key + "=" + value + return nil + } + } + b.config.Env = append(b.config.Env, key+"="+value) + return nil +} + +func (b *builderClient) CmdCmd(args string) error { + b.needCommit = true + var cmd []string + if err := json.Unmarshal([]byte(args), &cmd); err != nil { + utils.Debugf("Error unmarshalling: %s, using /bin/sh -c", err) + b.config.Cmd = []string{"/bin/sh", "-c", args} + } else { + b.config.Cmd = cmd + } + return nil +} + +func (b *builderClient) CmdExpose(args string) error { + ports := strings.Split(args, " ") + b.config.PortSpecs = append(ports, b.config.PortSpecs...) + return nil +} + +func (b *builderClient) CmdInsert(args string) error { + // FIXME: Reimplement this once the remove_hijack branch gets merged. + // We need to retrieve the resulting Id + return fmt.Errorf("INSERT not implemented") +} + +func (b *builderClient) run() (string, error) { + if b.image == "" { + return "", fmt.Errorf("Please provide a source image with `from` prior to run") + } + b.config.Image = b.image + body, _, err := b.cli.call("POST", "/containers/create", b.config) + if err != nil { + return "", err + } + + apiRun := &ApiRun{} + if err := json.Unmarshal(body, apiRun); err != nil { + return "", err + } + for _, warning := range apiRun.Warnings { + fmt.Fprintln(os.Stderr, "WARNING: ", warning) + } + + //start the container + _, _, err = b.cli.call("POST", "/containers/"+apiRun.Id+"/start", nil) + if err != nil { + return "", err + } + b.tmpContainers[apiRun.Id] = struct{}{} + + // Wait for it to finish + body, _, err = b.cli.call("POST", "/containers/"+apiRun.Id+"/wait", nil) + if err != nil { + return "", err + } + apiWait := &ApiWait{} + if err := json.Unmarshal(body, apiWait); err != nil { + return "", err + } + if apiWait.StatusCode != 0 { + return "", fmt.Errorf("The command %v returned a non-zero code: %d", b.config.Cmd, apiWait.StatusCode) + } + + return apiRun.Id, nil +} + +func (b *builderClient) commit(id string) error { + if b.image == "" { + return fmt.Errorf("Please provide a source image with `from` prior to run") + } + b.config.Image = b.image + + if id == "" { + cmd := b.config.Cmd + b.config.Cmd = []string{"true"} + if cid, err := b.run(); err != nil { + return err + } else { + id = cid + } + b.config.Cmd = cmd + } + + // Commit the container + v := url.Values{} + v.Set("container", id) + v.Set("author", b.maintainer) + + body, _, err := b.cli.call("POST", "/commit?"+v.Encode(), b.config) + if err != nil { + return err + } + apiId := &ApiId{} + if err := json.Unmarshal(body, apiId); err != nil { + return err + } + b.tmpImages[apiId.Id] = struct{}{} + b.image = apiId.Id + b.needCommit = false + return nil +} + +func (b *builderClient) Build(dockerfile io.Reader) (string, error) { + defer b.clearTmp(b.tmpContainers, b.tmpImages) + file := bufio.NewReader(dockerfile) + for { + line, err := file.ReadString('\n') + if err != nil { + if err == io.EOF { + break + } + return "", err + } + line = strings.Replace(strings.TrimSpace(line), " ", " ", 1) 
+ // Skip comments and empty line + if len(line) == 0 || line[0] == '#' { + continue + } + tmp := strings.SplitN(line, " ", 2) + if len(tmp) != 2 { + return "", fmt.Errorf("Invalid Dockerfile format") + } + instruction := strings.ToLower(strings.Trim(tmp[0], " ")) + arguments := strings.Trim(tmp[1], " ") + + fmt.Printf("%s %s (%s)\n", strings.ToUpper(instruction), arguments, b.image) + + method, exists := reflect.TypeOf(b).MethodByName("Cmd" + strings.ToUpper(instruction[:1]) + strings.ToLower(instruction[1:])) + if !exists { + fmt.Printf("Skipping unknown instruction %s\n", strings.ToUpper(instruction)) + } + ret := method.Func.Call([]reflect.Value{reflect.ValueOf(b), reflect.ValueOf(arguments)})[0].Interface() + if ret != nil { + return "", ret.(error) + } + + fmt.Printf("===> %v\n", b.image) + } + if b.needCommit { + if err := b.commit(""); err != nil { + return "", err + } + } + if b.image != "" { + // The build is successful, keep the temporary containers and images + for i := range b.tmpImages { + delete(b.tmpImages, i) + } + for i := range b.tmpContainers { + delete(b.tmpContainers, i) + } + fmt.Printf("Build finished. image id: %s\n", b.image) + return b.image, nil + } + return "", fmt.Errorf("An error occured during the build\n") +} + +func NewBuilderClient(addr string, port int) BuilderClient { + return &builderClient{ + cli: NewDockerCli(addr, port), + config: &Config{}, + tmpContainers: make(map[string]struct{}), + tmpImages: make(map[string]struct{}), + } +} diff --git a/components/engine/builder_test.go b/components/engine/builder_test.go deleted file mode 100644 index 08b7dd58cc..0000000000 --- a/components/engine/builder_test.go +++ /dev/null @@ -1,88 +0,0 @@ -package docker - -import ( - "strings" - "testing" -) - -const Dockerfile = ` -# VERSION 0.1 -# DOCKER-VERSION 0.2 - -from ` + unitTestImageName + ` -run sh -c 'echo root:testpass > /tmp/passwd' -run mkdir -p /var/run/sshd -insert https://raw.github.com/dotcloud/docker/master/CHANGELOG.md /tmp/CHANGELOG.md -` - -func TestBuild(t *testing.T) { - runtime, err := newTestRuntime() - if err != nil { - t.Fatal(err) - } - defer nuke(runtime) - - builder := NewBuilder(runtime) - - img, err := builder.Build(strings.NewReader(Dockerfile), &nopWriter{}) - if err != nil { - t.Fatal(err) - } - - container, err := builder.Create( - &Config{ - Image: img.Id, - Cmd: []string{"cat", "/tmp/passwd"}, - }, - ) - if err != nil { - t.Fatal(err) - } - defer runtime.Destroy(container) - - output, err := container.Output() - if err != nil { - t.Fatal(err) - } - if string(output) != "root:testpass\n" { - t.Fatalf("Unexpected output. 
Read '%s', expected '%s'", output, "root:testpass\n") - } - - container2, err := builder.Create( - &Config{ - Image: img.Id, - Cmd: []string{"ls", "-d", "/var/run/sshd"}, - }, - ) - if err != nil { - t.Fatal(err) - } - defer runtime.Destroy(container2) - - output, err = container2.Output() - if err != nil { - t.Fatal(err) - } - if string(output) != "/var/run/sshd\n" { - t.Fatal("/var/run/sshd has not been created") - } - - container3, err := builder.Create( - &Config{ - Image: img.Id, - Cmd: []string{"cat", "/tmp/CHANGELOG.md"}, - }, - ) - if err != nil { - t.Fatal(err) - } - defer runtime.Destroy(container3) - - output, err = container3.Output() - if err != nil { - t.Fatal(err) - } - if len(output) == 0 { - t.Fatal("/tmp/CHANGELOG.md has not been copied") - } -} diff --git a/components/engine/commands.go b/components/engine/commands.go index 736aee2f7a..dfb123ef96 100644 --- a/components/engine/commands.go +++ b/components/engine/commands.go @@ -7,6 +7,7 @@ import ( "fmt" "github.com/dotcloud/docker/auth" "github.com/dotcloud/docker/term" + "github.com/dotcloud/docker/utils" "io" "io/ioutil" "net" @@ -15,6 +16,7 @@ import ( "net/url" "os" "path/filepath" + "reflect" "strconv" "strings" "text/tabwriter" @@ -29,88 +31,66 @@ var ( ) func ParseCommands(args ...string) error { - - cmds := map[string]func(args ...string) error{ - "attach": CmdAttach, - "build": CmdBuild, - "commit": CmdCommit, - "diff": CmdDiff, - "export": CmdExport, - "images": CmdImages, - "info": CmdInfo, - "insert": CmdInsert, - "inspect": CmdInspect, - "import": CmdImport, - "history": CmdHistory, - "kill": CmdKill, - "login": CmdLogin, - "logs": CmdLogs, - "port": CmdPort, - "ps": CmdPs, - "pull": CmdPull, - "push": CmdPush, - "restart": CmdRestart, - "rm": CmdRm, - "rmi": CmdRmi, - "run": CmdRun, - "tag": CmdTag, - "search": CmdSearch, - "start": CmdStart, - "stop": CmdStop, - "version": CmdVersion, - "wait": CmdWait, - } + cli := NewDockerCli("0.0.0.0", 4243) if len(args) > 0 { - cmd, exists := cmds[args[0]] + methodName := "Cmd" + strings.ToUpper(args[0][:1]) + strings.ToLower(args[0][1:]) + method, exists := reflect.TypeOf(cli).MethodByName(methodName) if !exists { fmt.Println("Error: Command not found:", args[0]) - return cmdHelp(args...) + return cli.CmdHelp(args...) } - return cmd(args[1:]...) + ret := method.Func.CallSlice([]reflect.Value{ + reflect.ValueOf(cli), + reflect.ValueOf(args[1:]), + })[0].Interface() + if ret == nil { + return nil + } + return ret.(error) } - return cmdHelp(args...) + return cli.CmdHelp(args...) 
} -func cmdHelp(args ...string) error { +func (cli *DockerCli) CmdHelp(args ...string) error { help := "Usage: docker COMMAND [arg...]\n\nA self-sufficient runtime for linux containers.\n\nCommands:\n" - for _, cmd := range [][]string{ - {"attach", "Attach to a running container"}, - {"build", "Build a container from Dockerfile via stdin"}, - {"commit", "Create a new image from a container's changes"}, - {"diff", "Inspect changes on a container's filesystem"}, - {"export", "Stream the contents of a container as a tar archive"}, - {"history", "Show the history of an image"}, - {"images", "List images"}, - {"import", "Create a new filesystem image from the contents of a tarball"}, - {"info", "Display system-wide information"}, - {"insert", "Insert a file in an image"}, - {"inspect", "Return low-level information on a container"}, - {"kill", "Kill a running container"}, - {"login", "Register or Login to the docker registry server"}, - {"logs", "Fetch the logs of a container"}, - {"port", "Lookup the public-facing port which is NAT-ed to PRIVATE_PORT"}, - {"ps", "List containers"}, - {"pull", "Pull an image or a repository from the docker registry server"}, - {"push", "Push an image or a repository to the docker registry server"}, - {"restart", "Restart a running container"}, - {"rm", "Remove a container"}, - {"rmi", "Remove an image"}, - {"run", "Run a command in a new container"}, - {"search", "Search for an image in the docker index"}, - {"start", "Start a stopped container"}, - {"stop", "Stop a running container"}, - {"tag", "Tag an image into a repository"}, - {"version", "Show the docker version information"}, - {"wait", "Block until a container stops, then print its exit code"}, + for cmd, description := range map[string]string{ + "attach": "Attach to a running container", + "build": "Build a container from Dockerfile or via stdin", + "commit": "Create a new image from a container's changes", + "diff": "Inspect changes on a container's filesystem", + "export": "Stream the contents of a container as a tar archive", + "history": "Show the history of an image", + "images": "List images", + "import": "Create a new filesystem image from the contents of a tarball", + "info": "Display system-wide information", + "insert": "Insert a file in an image", + "inspect": "Return low-level information on a container", + "kill": "Kill a running container", + "login": "Register or Login to the docker registry server", + "logs": "Fetch the logs of a container", + "port": "Lookup the public-facing port which is NAT-ed to PRIVATE_PORT", + "ps": "List containers", + "pull": "Pull an image or a repository from the docker registry server", + "push": "Push an image or a repository to the docker registry server", + "restart": "Restart a running container", + "rm": "Remove a container", + "rmi": "Remove an image", + "run": "Run a command in a new container", + "search": "Search for an image in the docker index", + "start": "Start a stopped container", + "stop": "Stop a running container", + "tag": "Tag an image into a repository", + "version": "Show the docker version information", + "wait": "Block until a container stops, then print its exit code", } { - help += fmt.Sprintf(" %-10.10s%s\n", cmd[0], cmd[1]) + help += fmt.Sprintf(" %-10.10s%s\n", cmd, description) } fmt.Println(help) return nil } -func CmdInsert(args ...string) error { +func (cli *DockerCli) CmdInsert(args ...string) error { cmd := Subcmd("insert", "IMAGE URL PATH", "Insert a file from URL in the IMAGE at PATH") if err := cmd.Parse(args); err != 
nil { return nil @@ -124,28 +104,44 @@ func CmdInsert(args ...string) error { v.Set("url", cmd.Arg(1)) v.Set("path", cmd.Arg(2)) - err := hijack("POST", "/images/"+cmd.Arg(0)+"?"+v.Encode(), false) + err := cli.stream("POST", "/images/"+cmd.Arg(0)+"/insert?"+v.Encode(), nil, os.Stdout) if err != nil { return err } return nil } -func CmdBuild(args ...string) error { - cmd := Subcmd("build", "-", "Build an image from Dockerfile via stdin") +func (cli *DockerCli) CmdBuild(args ...string) error { + cmd := Subcmd("build", "-|Dockerfile", "Build an image from Dockerfile or via stdin") if err := cmd.Parse(args); err != nil { return nil } + var ( + file io.ReadCloser + err error + ) - err := hijack("POST", "/build", false) - if err != nil { + if cmd.NArg() == 0 { + file, err = os.Open("Dockerfile") + if err != nil { + return err + } + } else if cmd.Arg(0) == "-" { + file = os.Stdin + } else { + file, err = os.Open(cmd.Arg(0)) + if err != nil { + return err + } + } + if _, err := NewBuilderClient("0.0.0.0", 4243).Build(file); err != nil { return err } return nil } // 'docker login': login / register a user to registry service. -func CmdLogin(args ...string) error { +func (cli *DockerCli) CmdLogin(args ...string) error { var readStringOnRawTerminal = func(stdin io.Reader, stdout io.Writer, echo bool) string { char := make([]byte, 1) buffer := make([]byte, 64) @@ -188,11 +184,11 @@ func CmdLogin(args ...string) error { return readStringOnRawTerminal(stdin, stdout, false) } - oldState, err := SetRawTerminal() + oldState, err := term.SetRawTerminal() if err != nil { return err } else { - defer RestoreTerminal(oldState) + defer term.RestoreTerminal(oldState) } cmd := Subcmd("login", "", "Register or Login to the docker registry server") @@ -200,7 +196,7 @@ func CmdLogin(args ...string) error { return nil } - body, _, err := call("GET", "/auth", nil) + body, _, err := cli.call("GET", "/auth", nil) if err != nil { return err } @@ -241,7 +237,7 @@ func CmdLogin(args ...string) error { out.Password = password out.Email = email - body, _, err = call("POST", "/auth", out) + body, _, err = cli.call("POST", "/auth", out) if err != nil { return err } @@ -252,14 +248,14 @@ func CmdLogin(args ...string) error { return err } if out2.Status != "" { - RestoreTerminal(oldState) + term.RestoreTerminal(oldState) fmt.Print(out2.Status) } return nil } // 'docker wait': block until a container stops -func CmdWait(args ...string) error { +func (cli *DockerCli) CmdWait(args ...string) error { cmd := Subcmd("wait", "CONTAINER [CONTAINER...]", "Block until a container stops, then print its exit code.") if err := cmd.Parse(args); err != nil { return nil @@ -269,7 +265,7 @@ func CmdWait(args ...string) error { return nil } for _, name := range cmd.Args() { - body, _, err := call("POST", "/containers/"+name+"/wait", nil) + body, _, err := cli.call("POST", "/containers/"+name+"/wait", nil) if err != nil { fmt.Printf("%s", err) } else { @@ -285,17 +281,20 @@ func CmdWait(args ...string) error { } // 'docker version': show version information -func CmdVersion(args ...string) error { +func (cli *DockerCli) CmdVersion(args ...string) error { cmd := Subcmd("version", "", "Show the docker version information.") + fmt.Println(len(args)) if err := cmd.Parse(args); err != nil { return nil } + + fmt.Println(cmd.NArg()) if cmd.NArg() > 0 { cmd.Usage() return nil } - body, _, err := call("GET", "/version", nil) + body, _, err := cli.call("GET", "/version", nil) if err != nil { return err } @@ -303,7 +302,7 @@ func CmdVersion(args ...string) 
error { var out ApiVersion err = json.Unmarshal(body, &out) if err != nil { - Debugf("Error unmarshal: body: %s, err: %s\n", body, err) + utils.Debugf("Error unmarshal: body: %s, err: %s\n", body, err) return err } fmt.Println("Version:", out.Version) @@ -319,7 +318,7 @@ func CmdVersion(args ...string) error { } // 'docker info': display system-wide information. -func CmdInfo(args ...string) error { +func (cli *DockerCli) CmdInfo(args ...string) error { cmd := Subcmd("info", "", "Display system-wide information") if err := cmd.Parse(args); err != nil { return nil @@ -329,7 +328,7 @@ func CmdInfo(args ...string) error { return nil } - body, _, err := call("GET", "/info", nil) + body, _, err := cli.call("GET", "/info", nil) if err != nil { return err } @@ -347,7 +346,7 @@ func CmdInfo(args ...string) error { return nil } -func CmdStop(args ...string) error { +func (cli *DockerCli) CmdStop(args ...string) error { cmd := Subcmd("stop", "[OPTIONS] CONTAINER [CONTAINER...]", "Stop a running container") nSeconds := cmd.Int("t", 10, "wait t seconds before killing the container") if err := cmd.Parse(args); err != nil { @@ -362,7 +361,7 @@ func CmdStop(args ...string) error { v.Set("t", strconv.Itoa(*nSeconds)) for _, name := range cmd.Args() { - _, _, err := call("POST", "/containers/"+name+"/stop?"+v.Encode(), nil) + _, _, err := cli.call("POST", "/containers/"+name+"/stop?"+v.Encode(), nil) if err != nil { fmt.Printf("%s", err) } else { @@ -372,7 +371,7 @@ func CmdStop(args ...string) error { return nil } -func CmdRestart(args ...string) error { +func (cli *DockerCli) CmdRestart(args ...string) error { cmd := Subcmd("restart", "[OPTIONS] CONTAINER [CONTAINER...]", "Restart a running container") nSeconds := cmd.Int("t", 10, "wait t seconds before killing the container") if err := cmd.Parse(args); err != nil { @@ -387,7 +386,7 @@ func CmdRestart(args ...string) error { v.Set("t", strconv.Itoa(*nSeconds)) for _, name := range cmd.Args() { - _, _, err := call("POST", "/containers/"+name+"/restart?"+v.Encode(), nil) + _, _, err := cli.call("POST", "/containers/"+name+"/restart?"+v.Encode(), nil) if err != nil { fmt.Printf("%s", err) } else { @@ -397,7 +396,7 @@ func CmdRestart(args ...string) error { return nil } -func CmdStart(args ...string) error { +func (cli *DockerCli) CmdStart(args ...string) error { cmd := Subcmd("start", "CONTAINER [CONTAINER...]", "Restart a stopped container") if err := cmd.Parse(args); err != nil { return nil @@ -408,7 +407,7 @@ func CmdStart(args ...string) error { } for _, name := range args { - _, _, err := call("POST", "/containers/"+name+"/start", nil) + _, _, err := cli.call("POST", "/containers/"+name+"/start", nil) if err != nil { fmt.Printf("%s", err) } else { @@ -418,7 +417,7 @@ func CmdStart(args ...string) error { return nil } -func CmdInspect(args ...string) error { +func (cli *DockerCli) CmdInspect(args ...string) error { cmd := Subcmd("inspect", "CONTAINER|IMAGE", "Return low-level information on a container/image") if err := cmd.Parse(args); err != nil { return nil @@ -427,9 +426,9 @@ func CmdInspect(args ...string) error { cmd.Usage() return nil } - obj, _, err := call("GET", "/containers/"+cmd.Arg(0)+"/json", nil) + obj, _, err := cli.call("GET", "/containers/"+cmd.Arg(0)+"/json", nil) if err != nil { - obj, _, err = call("GET", "/images/"+cmd.Arg(0)+"/json", nil) + obj, _, err = cli.call("GET", "/images/"+cmd.Arg(0)+"/json", nil) if err != nil { return err } @@ -445,7 +444,7 @@ func CmdInspect(args ...string) error { return nil } -func CmdPort(args 
...string) error { +func (cli *DockerCli) CmdPort(args ...string) error { cmd := Subcmd("port", "CONTAINER PRIVATE_PORT", "Lookup the public-facing port which is NAT-ed to PRIVATE_PORT") if err := cmd.Parse(args); err != nil { return nil @@ -455,7 +454,7 @@ func CmdPort(args ...string) error { return nil } - body, _, err := call("GET", "/containers/"+cmd.Arg(0)+"/json", nil) + body, _, err := cli.call("GET", "/containers/"+cmd.Arg(0)+"/json", nil) if err != nil { return err } @@ -474,7 +473,7 @@ func CmdPort(args ...string) error { } // 'docker rmi IMAGE' removes all images with the name IMAGE -func CmdRmi(args ...string) error { +func (cli *DockerCli) CmdRmi(args ...string) error { cmd := Subcmd("rmi", "IMAGE [IMAGE...]", "Remove an image") if err := cmd.Parse(args); err != nil { return nil @@ -485,7 +484,7 @@ func CmdRmi(args ...string) error { } for _, name := range cmd.Args() { - _, _, err := call("DELETE", "/images/"+name, nil) + _, _, err := cli.call("DELETE", "/images/"+name, nil) if err != nil { fmt.Printf("%s", err) } else { @@ -495,7 +494,7 @@ func CmdRmi(args ...string) error { return nil } -func CmdHistory(args ...string) error { +func (cli *DockerCli) CmdHistory(args ...string) error { cmd := Subcmd("history", "IMAGE", "Show the history of an image") if err := cmd.Parse(args); err != nil { return nil @@ -505,7 +504,7 @@ func CmdHistory(args ...string) error { return nil } - body, _, err := call("GET", "/images/"+cmd.Arg(0)+"/history", nil) + body, _, err := cli.call("GET", "/images/"+cmd.Arg(0)+"/history", nil) if err != nil { return err } @@ -519,13 +518,13 @@ func CmdHistory(args ...string) error { fmt.Fprintln(w, "ID\tCREATED\tCREATED BY") for _, out := range outs { - fmt.Fprintf(w, "%s\t%s ago\t%s\n", out.Id, HumanDuration(time.Now().Sub(time.Unix(out.Created, 0))), out.CreatedBy) + fmt.Fprintf(w, "%s\t%s ago\t%s\n", out.Id, utils.HumanDuration(time.Now().Sub(time.Unix(out.Created, 0))), out.CreatedBy) } w.Flush() return nil } -func CmdRm(args ...string) error { +func (cli *DockerCli) CmdRm(args ...string) error { cmd := Subcmd("rm", "[OPTIONS] CONTAINER [CONTAINER...]", "Remove a container") v := cmd.Bool("v", false, "Remove the volumes associated to the container") if err := cmd.Parse(args); err != nil { @@ -540,7 +539,7 @@ func CmdRm(args ...string) error { val.Set("v", "1") } for _, name := range cmd.Args() { - _, _, err := call("DELETE", "/containers/"+name+"?"+val.Encode(), nil) + _, _, err := cli.call("DELETE", "/containers/"+name+"?"+val.Encode(), nil) if err != nil { fmt.Printf("%s", err) } else { @@ -551,7 +550,7 @@ func CmdRm(args ...string) error { } // 'docker kill NAME' kills a running container -func CmdKill(args ...string) error { +func (cli *DockerCli) CmdKill(args ...string) error { cmd := Subcmd("kill", "CONTAINER [CONTAINER...]", "Kill a running container") if err := cmd.Parse(args); err != nil { return nil @@ -562,7 +561,7 @@ func CmdKill(args ...string) error { } for _, name := range args { - _, _, err := call("POST", "/containers/"+name+"/kill", nil) + _, _, err := cli.call("POST", "/containers/"+name+"/kill", nil) if err != nil { fmt.Printf("%s", err) } else { @@ -572,7 +571,7 @@ func CmdKill(args ...string) error { return nil } -func CmdImport(args ...string) error { +func (cli *DockerCli) CmdImport(args ...string) error { cmd := Subcmd("import", "URL|- [REPOSITORY [TAG]]", "Create a new filesystem image from the contents of a tarball") if err := cmd.Parse(args); err != nil { @@ -588,14 +587,14 @@ func CmdImport(args ...string) error { v.Set("tag", 
tag) v.Set("fromSrc", src) - err := hijack("POST", "/images/create?"+v.Encode(), false) + err := cli.stream("POST", "/images/create?"+v.Encode(), os.Stdin, os.Stdout) if err != nil { return err } return nil } -func CmdPush(args ...string) error { +func (cli *DockerCli) CmdPush(args ...string) error { cmd := Subcmd("push", "[OPTION] NAME", "Push an image or a repository to the registry") registry := cmd.String("registry", "", "Registry host to push the image to") if err := cmd.Parse(args); err != nil { @@ -608,7 +607,7 @@ func CmdPush(args ...string) error { return nil } - body, _, err := call("GET", "/auth", nil) + body, _, err := cli.call("GET", "/auth", nil) if err != nil { return err } @@ -621,11 +620,11 @@ func CmdPush(args ...string) error { // If the login failed AND we're using the index, abort if *registry == "" && out.Username == "" { - if err := CmdLogin(args...); err != nil { + if err := cli.CmdLogin(args...); err != nil { return err } - body, _, err = call("GET", "/auth", nil) + body, _, err = cli.call("GET", "/auth", nil) if err != nil { return err } @@ -645,13 +644,13 @@ func CmdPush(args ...string) error { v := url.Values{} v.Set("registry", *registry) - if err := hijack("POST", "/images/"+name+"/push?"+v.Encode(), false); err != nil { + if err := cli.stream("POST", "/images/"+name+"/push?"+v.Encode(), nil, os.Stdout); err != nil { return err } return nil } -func CmdPull(args ...string) error { +func (cli *DockerCli) CmdPull(args ...string) error { cmd := Subcmd("pull", "NAME", "Pull an image or a repository from the registry") tag := cmd.String("t", "", "Download tagged image in repository") registry := cmd.String("registry", "", "Registry to download from. Necessary if image is pulled by ID") @@ -676,17 +675,18 @@ func CmdPull(args ...string) error { v.Set("tag", *tag) v.Set("registry", *registry) - if err := hijack("POST", "/images/create?"+v.Encode(), false); err != nil { + if err := cli.stream("POST", "/images/create?"+v.Encode(), nil, os.Stdout); err != nil { return err } return nil } -func CmdImages(args ...string) error { +func (cli *DockerCli) CmdImages(args ...string) error { cmd := Subcmd("images", "[OPTIONS] [NAME]", "List images") quiet := cmd.Bool("q", false, "only show numeric IDs") all := cmd.Bool("a", false, "show all images") + noTrunc := cmd.Bool("notrunc", false, "Don't truncate output") flViz := cmd.Bool("viz", false, "output graph in graphviz format") if err := cmd.Parse(args); err != nil { @@ -698,7 +698,7 @@ func CmdImages(args ...string) error { } if *flViz { - body, _, err := call("GET", "/images/viz", false) + body, _, err := cli.call("GET", "/images/viz", false) if err != nil { return err } @@ -708,14 +708,11 @@ func CmdImages(args ...string) error { if cmd.NArg() == 1 { v.Set("filter", cmd.Arg(0)) } - if *quiet { - v.Set("only_ids", "1") - } if *all { v.Set("all", "1") } - body, _, err := call("GET", "/images/json?"+v.Encode(), nil) + body, _, err := cli.call("GET", "/images/json?"+v.Encode(), nil) if err != nil { return err } @@ -732,10 +729,32 @@ func CmdImages(args ...string) error { } for _, out := range outs { + if out.Repository == "" { + out.Repository = "" + } + if out.Tag == "" { + out.Tag = "" + } + if !*quiet { - fmt.Fprintf(w, "%s\t%s\t%s\t%s ago\t%s (virtual %s)\n", out.Repository, out.Tag, out.Id, HumanDuration(time.Now().Sub(time.Unix(out.Created, 0))), HumanSize(out.Size), HumanSize(out.ParentSize)) + fmt.Fprintf(w, "%s\t%s\t", out.Repository, out.Tag) + if *noTrunc { + fmt.Fprintf(w, "%s\t", out.Id) + } else { + fmt.Fprintf(w, 
"%s\t", utils.TruncateId(out.Id)) + } + fmt.Fprintf(w, "%s ago\t", utils.HumanDuration(time.Now().Sub(time.Unix(out.Created, 0)))) + if out.ParentSize > 0 { + fmt.Fprintf(w, "%s (virtual %s)\n", utils.HumanSize(out.Size), utils.HumanSize(out.ParentSize)) + } else { + fmt.Fprintf(w, "%s\n", utils.HumanSize(out.Size)) + } } else { - fmt.Fprintln(w, out.Id) + if *noTrunc { + fmt.Fprintln(w, out.Id) + } else { + fmt.Fprintln(w, utils.TruncateId(out.Id)) + } } } @@ -746,7 +765,7 @@ func CmdImages(args ...string) error { return nil } -func CmdPs(args ...string) error { +func (cli *DockerCli) CmdPs(args ...string) error { cmd := Subcmd("ps", "[OPTIONS]", "List containers") quiet := cmd.Bool("q", false, "Only display numeric IDs") all := cmd.Bool("a", false, "Show all containers. Only running containers are shown by default.") @@ -763,15 +782,9 @@ func CmdPs(args ...string) error { if *last == -1 && *nLatest { *last = 1 } - if *quiet { - v.Set("only_ids", "1") - } if *all { v.Set("all", "1") } - if *noTrunc { - v.Set("trunc_cmd", "0") - } if *last != -1 { v.Set("limit", strconv.Itoa(*last)) } @@ -782,7 +795,7 @@ func CmdPs(args ...string) error { v.Set("before", *before) } - body, _, err := call("GET", "/containers/ps?"+v.Encode(), nil) + body, _, err := cli.call("GET", "/containers/ps?"+v.Encode(), nil) if err != nil { return err } @@ -799,14 +812,22 @@ func CmdPs(args ...string) error { for _, out := range outs { if !*quiet { - fmt.Fprintf(w, "%s\t%s\t%s\t%s\t%s ago\t%s\t", out.Id, out.Image, out.Command, out.Status, HumanDuration(time.Now().Sub(time.Unix(out.Created, 0))), out.Ports) - if out.SizeRootFs > 0 { - fmt.Fprintf(w, "%s (virtual %s)\n", HumanSize(out.SizeRw), HumanSize(out.SizeRootFs)) + if *noTrunc { + fmt.Fprintf(w, "%s\t%s\t%s\t%s\t%s ago\t%s\t", out.Id, out.Image, out.Command, out.Status, utils.HumanDuration(time.Now().Sub(time.Unix(out.Created, 0))), out.Ports) } else { - fmt.Fprintf(w, "%s\n", HumanSize(out.SizeRw)) + fmt.Fprintf(w, "%s\t%s\t%s\t%s\t%s ago\t%s\t", utils.TruncateId(out.Id), out.Image, utils.Trunc(out.Command, 20), out.Status, utils.HumanDuration(time.Now().Sub(time.Unix(out.Created, 0))), out.Ports) + } + if out.SizeRootFs > 0 { + fmt.Fprintf(w, "%s (virtual %s)\n", utils.HumanSize(out.SizeRw), utils.HumanSize(out.SizeRootFs)) + } else { + fmt.Fprintf(w, "%s\n", utils.HumanSize(out.SizeRw)) } } else { - fmt.Fprintln(w, out.Id) + if *noTrunc { + fmt.Fprintln(w, out.Id) + } else { + fmt.Fprintln(w, utils.TruncateId(out.Id)) + } } } @@ -816,7 +837,7 @@ func CmdPs(args ...string) error { return nil } -func CmdCommit(args ...string) error { +func (cli *DockerCli) CmdCommit(args ...string) error { cmd := Subcmd("commit", "[OPTIONS] CONTAINER [REPOSITORY [TAG]]", "Create a new image from a container's changes") flComment := cmd.String("m", "", "Commit message") flAuthor := cmd.String("author", "", "Author (eg. 
\"John Hannibal Smith \"") @@ -843,7 +864,7 @@ func CmdCommit(args ...string) error { return err } } - body, _, err := call("POST", "/commit?"+v.Encode(), config) + body, _, err := cli.call("POST", "/commit?"+v.Encode(), config) if err != nil { return err } @@ -858,7 +879,7 @@ func CmdCommit(args ...string) error { return nil } -func CmdExport(args ...string) error { +func (cli *DockerCli) CmdExport(args ...string) error { cmd := Subcmd("export", "CONTAINER", "Export the contents of a filesystem as a tar archive") if err := cmd.Parse(args); err != nil { return nil @@ -869,13 +890,13 @@ func CmdExport(args ...string) error { return nil } - if err := stream("GET", "/containers/"+cmd.Arg(0)+"/export"); err != nil { + if err := cli.stream("GET", "/containers/"+cmd.Arg(0)+"/export", nil, os.Stdout); err != nil { return err } return nil } -func CmdDiff(args ...string) error { +func (cli *DockerCli) CmdDiff(args ...string) error { cmd := Subcmd("diff", "CONTAINER", "Inspect changes on a container's filesystem") if err := cmd.Parse(args); err != nil { return nil @@ -885,7 +906,7 @@ func CmdDiff(args ...string) error { return nil } - body, _, err := call("GET", "/containers/"+cmd.Arg(0)+"/changes", nil) + body, _, err := cli.call("GET", "/containers/"+cmd.Arg(0)+"/changes", nil) if err != nil { return err } @@ -901,7 +922,7 @@ func CmdDiff(args ...string) error { return nil } -func CmdLogs(args ...string) error { +func (cli *DockerCli) CmdLogs(args ...string) error { cmd := Subcmd("logs", "CONTAINER", "Fetch the logs of a container") if err := cmd.Parse(args); err != nil { return nil @@ -916,13 +937,13 @@ func CmdLogs(args ...string) error { v.Set("stdout", "1") v.Set("stderr", "1") - if err := hijack("POST", "/containers/"+cmd.Arg(0)+"/attach?"+v.Encode(), false); err != nil { + if err := cli.hijack("POST", "/containers/"+cmd.Arg(0)+"/attach?"+v.Encode(), false); err != nil { return err } return nil } -func CmdAttach(args ...string) error { +func (cli *DockerCli) CmdAttach(args ...string) error { cmd := Subcmd("attach", "CONTAINER", "Attach to a running container") if err := cmd.Parse(args); err != nil { return nil @@ -932,7 +953,7 @@ func CmdAttach(args ...string) error { return nil } - body, _, err := call("GET", "/containers/"+cmd.Arg(0)+"/json", nil) + body, _, err := cli.call("GET", "/containers/"+cmd.Arg(0)+"/json", nil) if err != nil { return err } @@ -949,13 +970,13 @@ func CmdAttach(args ...string) error { v.Set("stderr", "1") v.Set("stdin", "1") - if err := hijack("POST", "/containers/"+cmd.Arg(0)+"/attach?"+v.Encode(), container.Config.Tty); err != nil { + if err := cli.hijack("POST", "/containers/"+cmd.Arg(0)+"/attach?"+v.Encode(), container.Config.Tty); err != nil { return err } return nil } -func CmdSearch(args ...string) error { +func (cli *DockerCli) CmdSearch(args ...string) error { cmd := Subcmd("search", "NAME", "Search the docker index for images") if err := cmd.Parse(args); err != nil { return nil @@ -967,7 +988,7 @@ func CmdSearch(args ...string) error { v := url.Values{} v.Set("term", cmd.Arg(0)) - body, _, err := call("GET", "/images/search?"+v.Encode(), nil) + body, _, err := cli.call("GET", "/images/search?"+v.Encode(), nil) if err != nil { return err } @@ -1048,7 +1069,7 @@ func (opts PathOpts) Set(val string) error { return nil } -func CmdTag(args ...string) error { +func (cli *DockerCli) CmdTag(args ...string) error { cmd := Subcmd("tag", "[OPTIONS] IMAGE REPOSITORY [TAG]", "Tag an image into a repository") force := cmd.Bool("f", false, "Force") if err := 
cmd.Parse(args); err != nil { @@ -1069,13 +1090,13 @@ func CmdTag(args ...string) error { v.Set("force", "1") } - if _, _, err := call("POST", "/images/"+cmd.Arg(0)+"/tag?"+v.Encode(), nil); err != nil { + if _, _, err := cli.call("POST", "/images/"+cmd.Arg(0)+"/tag?"+v.Encode(), nil); err != nil { return err } return nil } -func CmdRun(args ...string) error { +func (cli *DockerCli) CmdRun(args ...string) error { config, cmd, err := ParseRun(args, nil) if err != nil { return err @@ -1086,16 +1107,16 @@ func CmdRun(args ...string) error { } //create the container - body, statusCode, err := call("POST", "/containers/create", config) + body, statusCode, err := cli.call("POST", "/containers/create", config) //if image not found try to pull it if statusCode == 404 { v := url.Values{} v.Set("fromImage", config.Image) - err = hijack("POST", "/images/create?"+v.Encode(), false) + err = cli.stream("POST", "/images/create?"+v.Encode(), nil, os.Stderr) if err != nil { return err } - body, _, err = call("POST", "/containers/create", config) + body, _, err = cli.call("POST", "/containers/create", config) if err != nil { return err } @@ -1130,13 +1151,13 @@ func CmdRun(args ...string) error { } //start the container - _, _, err = call("POST", "/containers/"+out.Id+"/start", nil) + _, _, err = cli.call("POST", "/containers/"+out.Id+"/start", nil) if err != nil { return err } if config.AttachStdin || config.AttachStdout || config.AttachStderr { - if err := hijack("POST", "/containers/"+out.Id+"/attach?"+v.Encode(), config.Tty); err != nil { + if err := cli.hijack("POST", "/containers/"+out.Id+"/attach?"+v.Encode(), config.Tty); err != nil { return err } } @@ -1146,7 +1167,7 @@ func CmdRun(args ...string) error { return nil } -func call(method, path string, data interface{}) ([]byte, int, error) { +func (cli *DockerCli) call(method, path string, data interface{}) ([]byte, int, error) { var params io.Reader if data != nil { buf, err := json.Marshal(data) @@ -1156,7 +1177,7 @@ func call(method, path string, data interface{}) ([]byte, int, error) { params = bytes.NewBuffer(buf) } - req, err := http.NewRequest(method, "http://0.0.0.0:4243"+path, params) + req, err := http.NewRequest(method, fmt.Sprintf("http://%s:%d", cli.host, cli.port)+path, params) if err != nil { return nil, -1, err } @@ -1184,8 +1205,11 @@ func call(method, path string, data interface{}) ([]byte, int, error) { return body, resp.StatusCode, nil } -func stream(method, path string) error { - req, err := http.NewRequest(method, "http://0.0.0.0:4243"+path, nil) +func (cli *DockerCli) stream(method, path string, in io.Reader, out io.Writer) error { + if (method == "POST" || method == "PUT") && in == nil { + in = bytes.NewReader([]byte{}) + } + req, err := http.NewRequest(method, fmt.Sprintf("http://%s:%d%s", cli.host, cli.port, path), in) if err != nil { return err } @@ -1201,19 +1225,27 @@ func stream(method, path string) error { return err } defer resp.Body.Close() - if _, err := io.Copy(os.Stdout, resp.Body); err != nil { + if resp.StatusCode < 200 || resp.StatusCode >= 400 { + body, err := ioutil.ReadAll(resp.Body) + if err != nil { + return err + } + return fmt.Errorf("error: %s", body) + } + + if _, err := io.Copy(out, resp.Body); err != nil { return err } return nil } -func hijack(method, path string, setRawTerminal bool) error { +func (cli *DockerCli) hijack(method, path string, setRawTerminal bool) error { req, err := http.NewRequest(method, path, nil) if err != nil { return err } req.Header.Set("Content-Type", "plain/text") - dial, 
err := net.Dial("tcp", "0.0.0.0:4243") + dial, err := net.Dial("tcp", fmt.Sprintf("%s:%d", cli.host, cli.port)) if err != nil { return err } @@ -1224,20 +1256,20 @@ func hijack(method, path string, setRawTerminal bool) error { rwc, br := clientconn.Hijack() defer rwc.Close() - receiveStdout := Go(func() error { + receiveStdout := utils.Go(func() error { _, err := io.Copy(os.Stdout, br) return err }) if setRawTerminal && term.IsTerminal(int(os.Stdin.Fd())) && os.Getenv("NORAW") == "" { - if oldState, err := SetRawTerminal(); err != nil { + if oldState, err := term.SetRawTerminal(); err != nil { return err } else { - defer RestoreTerminal(oldState) + defer term.RestoreTerminal(oldState) } } - sendStdin := Go(func() error { + sendStdin := utils.Go(func() error { _, err := io.Copy(rwc, os.Stdin) if err := rwc.(*net.TCPConn).CloseWrite(); err != nil { fmt.Fprintf(os.Stderr, "Couldn't send EOF: %s\n", err) @@ -1266,3 +1298,12 @@ func Subcmd(name, signature, description string) *flag.FlagSet { } return flags } + +func NewDockerCli(host string, port int) *DockerCli { + return &DockerCli{host, port} +} + +type DockerCli struct { + host string + port int +} diff --git a/components/engine/commands_test.go b/components/engine/commands_test.go index 80f31e4f76..05ece80dac 100644 --- a/components/engine/commands_test.go +++ b/components/engine/commands_test.go @@ -413,6 +413,7 @@ func TestAttachDisconnect(t *testing.T) { container, err := NewBuilder(runtime).Create( &Config{ Image: GetTestImage(runtime).Id, + CpuShares: 1000, Memory: 33554432, Cmd: []string{"/bin/cat"}, OpenStdin: true, diff --git a/components/engine/container.go b/components/engine/container.go index 8ccdfb2a43..b6d9ae5d31 100644 --- a/components/engine/container.go +++ b/components/engine/container.go @@ -4,6 +4,7 @@ import ( "encoding/json" "flag" "fmt" + "github.com/dotcloud/docker/utils" "github.com/kr/pty" "io" "io/ioutil" @@ -40,8 +41,8 @@ type Container struct { ResolvConfPath string cmd *exec.Cmd - stdout *writeBroadcaster - stderr *writeBroadcaster + stdout *utils.WriteBroadcaster + stderr *utils.WriteBroadcaster stdin io.ReadCloser stdinPipe io.WriteCloser ptyMaster io.Closer @@ -57,6 +58,7 @@ type Config struct { User string Memory int64 // Memory limit (in bytes) MemorySwap int64 // Total memory usage (memory + swap); set `-1' to disable swap + CpuShares int64 // CPU shares (relative weight vs. 
other containers) AttachStdin bool AttachStdout bool AttachStderr bool @@ -92,6 +94,8 @@ func ParseRun(args []string, capabilities *Capabilities) (*Config, *flag.FlagSet *flMemory = 0 } + flCpuShares := cmd.Int64("c", 0, "CPU shares (relative weight)") + var flPorts ListOpts cmd.Var(&flPorts, "p", "Expose a container's port to the host (use 'docker port' to see the actual mapping)") @@ -138,6 +142,7 @@ func ParseRun(args []string, capabilities *Capabilities) (*Config, *flag.FlagSet Tty: *flTty, OpenStdin: *flStdin, Memory: *flMemory, + CpuShares: *flCpuShares, AttachStdin: flAttach.Get("stdin"), AttachStdout: flAttach.Get("stdout"), AttachStderr: flAttach.Get("stderr"), @@ -248,9 +253,9 @@ func (container *Container) startPty() error { // Copy the PTYs to our broadcasters go func() { defer container.stdout.CloseWriters() - Debugf("[startPty] Begin of stdout pipe") + utils.Debugf("[startPty] Begin of stdout pipe") io.Copy(container.stdout, ptyMaster) - Debugf("[startPty] End of stdout pipe") + utils.Debugf("[startPty] End of stdout pipe") }() // stdin @@ -259,9 +264,9 @@ func (container *Container) startPty() error { container.cmd.SysProcAttr = &syscall.SysProcAttr{Setctty: true, Setsid: true} go func() { defer container.stdin.Close() - Debugf("[startPty] Begin of stdin pipe") + utils.Debugf("[startPty] Begin of stdin pipe") io.Copy(ptyMaster, container.stdin) - Debugf("[startPty] End of stdin pipe") + utils.Debugf("[startPty] End of stdin pipe") }() } if err := container.cmd.Start(); err != nil { @@ -281,9 +286,9 @@ func (container *Container) start() error { } go func() { defer stdin.Close() - Debugf("Begin of stdin pipe [start]") + utils.Debugf("Begin of stdin pipe [start]") io.Copy(stdin, container.stdin) - Debugf("End of stdin pipe [start]") + utils.Debugf("End of stdin pipe [start]") }() } return container.cmd.Start() @@ -300,8 +305,8 @@ func (container *Container) Attach(stdin io.ReadCloser, stdinCloser io.Closer, s errors <- err } else { go func() { - Debugf("[start] attach stdin\n") - defer Debugf("[end] attach stdin\n") + utils.Debugf("[start] attach stdin\n") + defer utils.Debugf("[end] attach stdin\n") // No matter what, when stdin is closed (io.Copy unblock), close stdout and stderr if cStdout != nil { defer cStdout.Close() @@ -313,12 +318,12 @@ func (container *Container) Attach(stdin io.ReadCloser, stdinCloser io.Closer, s defer cStdin.Close() } if container.Config.Tty { - _, err = CopyEscapable(cStdin, stdin) + _, err = utils.CopyEscapable(cStdin, stdin) } else { _, err = io.Copy(cStdin, stdin) } if err != nil { - Debugf("[error] attach stdin: %s\n", err) + utils.Debugf("[error] attach stdin: %s\n", err) } // Discard error, expecting pipe error errors <- nil @@ -332,8 +337,8 @@ func (container *Container) Attach(stdin io.ReadCloser, stdinCloser io.Closer, s } else { cStdout = p go func() { - Debugf("[start] attach stdout\n") - defer Debugf("[end] attach stdout\n") + utils.Debugf("[start] attach stdout\n") + defer utils.Debugf("[end] attach stdout\n") // If we are in StdinOnce mode, then close stdin if container.Config.StdinOnce { if stdin != nil { @@ -345,7 +350,7 @@ func (container *Container) Attach(stdin io.ReadCloser, stdinCloser io.Closer, s } _, err := io.Copy(stdout, cStdout) if err != nil { - Debugf("[error] attach stdout: %s\n", err) + utils.Debugf("[error] attach stdout: %s\n", err) } errors <- err }() @@ -358,8 +363,8 @@ func (container *Container) Attach(stdin io.ReadCloser, stdinCloser io.Closer, s } else { cStderr = p go func() { - Debugf("[start] attach 
stderr\n") - defer Debugf("[end] attach stderr\n") + utils.Debugf("[start] attach stderr\n") + defer utils.Debugf("[end] attach stderr\n") // If we are in StdinOnce mode, then close stdin if container.Config.StdinOnce { if stdin != nil { @@ -371,13 +376,13 @@ func (container *Container) Attach(stdin io.ReadCloser, stdinCloser io.Closer, s } _, err := io.Copy(stderr, cStderr) if err != nil { - Debugf("[error] attach stderr: %s\n", err) + utils.Debugf("[error] attach stderr: %s\n", err) } errors <- err }() } } - return Go(func() error { + return utils.Go(func() error { if cStdout != nil { defer cStdout.Close() } @@ -387,14 +392,14 @@ func (container *Container) Attach(stdin io.ReadCloser, stdinCloser io.Closer, s // FIXME: how do clean up the stdin goroutine without the unwanted side effect // of closing the passed stdin? Add an intermediary io.Pipe? for i := 0; i < nJobs; i += 1 { - Debugf("Waiting for job %d/%d\n", i+1, nJobs) + utils.Debugf("Waiting for job %d/%d\n", i+1, nJobs) if err := <-errors; err != nil { - Debugf("Job %d returned error %s. Aborting all jobs\n", i+1, err) + utils.Debugf("Job %d returned error %s. Aborting all jobs\n", i+1, err) return err } - Debugf("Job %d completed successfully\n", i+1) + utils.Debugf("Job %d completed successfully\n", i+1) } - Debugf("All jobs completed successfully\n") + utils.Debugf("All jobs completed successfully\n") return nil }) } @@ -552,13 +557,13 @@ func (container *Container) StdinPipe() (io.WriteCloser, error) { func (container *Container) StdoutPipe() (io.ReadCloser, error) { reader, writer := io.Pipe() container.stdout.AddWriter(writer) - return newBufReader(reader), nil + return utils.NewBufReader(reader), nil } func (container *Container) StderrPipe() (io.ReadCloser, error) { reader, writer := io.Pipe() container.stderr.AddWriter(writer) - return newBufReader(reader), nil + return utils.NewBufReader(reader), nil } func (container *Container) allocateNetwork() error { @@ -606,20 +611,20 @@ func (container *Container) waitLxc() error { func (container *Container) monitor() { // Wait for the program to exit - Debugf("Waiting for process") + utils.Debugf("Waiting for process") // If the command does not exists, try to wait via lxc if container.cmd == nil { if err := container.waitLxc(); err != nil { - Debugf("%s: Process: %s", container.Id, err) + utils.Debugf("%s: Process: %s", container.Id, err) } } else { if err := container.cmd.Wait(); err != nil { // Discard the error as any signals or non 0 returns will generate an error - Debugf("%s: Process: %s", container.Id, err) + utils.Debugf("%s: Process: %s", container.Id, err) } } - Debugf("Process finished") + utils.Debugf("Process finished") var exitCode int = -1 if container.cmd != nil { @@ -630,19 +635,19 @@ func (container *Container) monitor() { container.releaseNetwork() if container.Config.OpenStdin { if err := container.stdin.Close(); err != nil { - Debugf("%s: Error close stdin: %s", container.Id, err) + utils.Debugf("%s: Error close stdin: %s", container.Id, err) } } if err := container.stdout.CloseWriters(); err != nil { - Debugf("%s: Error close stdout: %s", container.Id, err) + utils.Debugf("%s: Error close stdout: %s", container.Id, err) } if err := container.stderr.CloseWriters(); err != nil { - Debugf("%s: Error close stderr: %s", container.Id, err) + utils.Debugf("%s: Error close stderr: %s", container.Id, err) } if container.ptyMaster != nil { if err := container.ptyMaster.Close(); err != nil { - Debugf("%s: Error closing Pty master: %s", container.Id, err) + 
utils.Debugf("%s: Error closing Pty master: %s", container.Id, err) } } @@ -759,7 +764,7 @@ func (container *Container) RwChecksum() (string, error) { if err != nil { return "", err } - return HashData(rwData) + return utils.HashData(rwData) } func (container *Container) Export() (Archive, error) { @@ -830,7 +835,7 @@ func (container *Container) Unmount() error { // In case of a collision a lookup with Runtime.Get() will fail, and the caller // will need to use a langer prefix, or the full-length container Id. func (container *Container) ShortId() string { - return TruncateId(container.Id) + return utils.TruncateId(container.Id) } func (container *Container) logPath(name string) string { diff --git a/components/engine/container_test.go b/components/engine/container_test.go index 9770e800ff..3ed1763a3e 100644 --- a/components/engine/container_test.go +++ b/components/engine/container_test.go @@ -390,6 +390,7 @@ func TestStart(t *testing.T) { &Config{ Image: GetTestImage(runtime).Id, Memory: 33554432, + CpuShares: 1000, Cmd: []string{"/bin/cat"}, OpenStdin: true, }, @@ -1063,12 +1064,17 @@ func TestLXCConfig(t *testing.T) { memMin := 33554432 memMax := 536870912 mem := memMin + rand.Intn(memMax-memMin) + // CPU shares as well + cpuMin := 100 + cpuMax := 10000 + cpu := cpuMin + rand.Intn(cpuMax-cpuMin) container, err := NewBuilder(runtime).Create(&Config{ Image: GetTestImage(runtime).Id, Cmd: []string{"/bin/true"}, - Hostname: "foobar", - Memory: int64(mem), + Hostname: "foobar", + Memory: int64(mem), + CpuShares: int64(cpu), }, ) if err != nil { diff --git a/components/engine/docker/docker.go b/components/engine/docker/docker.go index 778326a810..c8c1a65603 100644 --- a/components/engine/docker/docker.go +++ b/components/engine/docker/docker.go @@ -4,6 +4,7 @@ import ( "flag" "fmt" "github.com/dotcloud/docker" + "github.com/dotcloud/docker/utils" "io/ioutil" "log" "os" @@ -17,7 +18,7 @@ var ( ) func main() { - if docker.SelfPath() == "/sbin/init" { + if utils.SelfPath() == "/sbin/init" { // Running in init mode docker.SysInit() return diff --git a/components/engine/docs/Makefile b/components/engine/docs/Makefile index 9298123f7f..26168b6f38 100644 --- a/components/engine/docs/Makefile +++ b/components/engine/docs/Makefile @@ -44,7 +44,7 @@ clean: -rm -rf $(BUILDDIR)/* docs: - -rm -rf $(BUILDDIR)/* + #-rm -rf $(BUILDDIR)/* $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/html @echo @echo "Build finished. The documentation pages are now in $(BUILDDIR)/html." 
@@ -59,18 +59,13 @@ site: connect: @echo connecting dotcloud to www.docker.io website, make sure to use user 1 @cd _build/website/ ; \ - dotcloud list ; \ - dotcloud connect dockerwebsite + dotcloud connect dockerwebsite ; + dotcloud list push: @cd _build/website/ ; \ dotcloud push -github-deploy: docs - rm -fr github-deploy - git clone ssh://git@github.com/dotcloud/docker github-deploy - cd github-deploy && git checkout -f gh-pages && git rm -r * && rsync -avH ../_build/html/ ./ && touch .nojekyll && echo "docker.io" > CNAME && git add * && git commit -m "Updating docs" - $(VERSIONS): @echo "Hello world" diff --git a/components/engine/docs/sources/.nojekyll b/components/engine/docs/sources/.nojekyll deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/components/engine/docs/sources/CNAME b/components/engine/docs/sources/CNAME deleted file mode 100644 index 243e482261..0000000000 --- a/components/engine/docs/sources/CNAME +++ /dev/null @@ -1 +0,0 @@ -docker.io diff --git a/components/engine/docs/sources/remote-api/api.rst b/components/engine/docs/sources/api/docker_remote_api.rst similarity index 93% rename from components/engine/docs/sources/remote-api/api.rst rename to components/engine/docs/sources/api/docker_remote_api.rst index a6f0662644..2b1aad0e84 100644 --- a/components/engine/docs/sources/remote-api/api.rst +++ b/components/engine/docs/sources/api/docker_remote_api.rst @@ -9,7 +9,7 @@ Docker Remote API - The Remote API is replacing rcli - Default port in the docker deamon is 4243 -- The API tends to be REST, but for some complex commands, like attach or pull, the HTTP connection in hijacked to transport stdout stdin and stderr +- The API tends to be REST, but for some complex commands, like attach or pull, the HTTP connection is hijacked to transport stdout stdin and stderr 2. Endpoints ============ @@ -28,7 +28,7 @@ List containers .. sourcecode:: http - GET /containers/ps?trunc_cmd=0&all=1&only_ids=0&before=8dfafdbc3a40 HTTP/1.1 + GET /containers/ps?all=1&before=8dfafdbc3a40 HTTP/1.1 **Example response**: @@ -68,13 +68,12 @@ List containers } ] - :query only_ids: 1 or 0, Only display numeric IDs. Default 0 - :query all: 1 or 0, Show all containers. Only running containers are shown by default - :query trunc_cmd: 1 or 0, Truncate output. Output is truncated by default + :query all: 1/True/true or 0/False/false, Show all containers. Only running containers are shown by default :query limit: Show ``limit`` last created containers, include non-running ones. :query since: Show only containers created since Id, include non-running ones. :query before: Show only containers created before Id, include non-running ones. :statuscode 200: no error + :statuscode 400: bad parameter :statuscode 500: server error @@ -391,12 +390,13 @@ Attach to a container {{ STREAM }} - :query logs: 1 or 0, return logs. Default 0 - :query stream: 1 or 0, return stream. Default 0 - :query stdin: 1 or 0, if stream=1, attach to stdin. Default 0 - :query stdout: 1 or 0, if logs=1, return stdout log, if stream=1, attach to stdout. Default 0 - :query stderr: 1 or 0, if logs=1, return stderr log, if stream=1, attach to stderr. Default 0 + :query logs: 1/True/true or 0/False/false, return logs. Default false + :query stream: 1/True/true or 0/False/false, return stream. Default false + :query stdin: 1/True/true or 0/False/false, if stream=true, attach to stdin. Default false + :query stdout: 1/True/true or 0/False/false, if logs=true, return stdout log, if stream=true, attach to stdout. 
Default false + :query stderr: 1/True/true or 0/False/false, if logs=true, return stderr log, if stream=true, attach to stderr. Default false :statuscode 200: no error + :statuscode 400: bad parameter :statuscode 404: no such container :statuscode 500: server error @@ -447,8 +447,9 @@ Remove a container HTTP/1.1 204 OK - :query v: 1 or 0, Remove the volumes associated to the container. Default 0 + :query v: 1/True/true or 0/False/false, Remove the volumes associated to the container. Default false :statuscode 204: no error + :statuscode 400: bad parameter :statuscode 404: no such container :statuscode 500: server error @@ -467,7 +468,7 @@ List Images .. sourcecode:: http - GET /images/json?all=0&only_ids=0 HTTP/1.1 + GET /images/json?all=0 HTTP/1.1 **Example response**: @@ -523,9 +524,9 @@ List Images base [style=invisible] } - :query only_ids: 1 or 0, Only display numeric IDs. Default 0 - :query all: 1 or 0, Show all containers. Only running containers are shown by default + :query all: 1/True/true or 0/False/false, Show all containers. Only running containers are shown by default :statuscode 200: no error + :statuscode 400: bad parameter :statuscode 500: server error @@ -723,8 +724,9 @@ Tag an image into a repository HTTP/1.1 200 OK :query repo: The repository to tag in - :query force: 1 or 0, default 0 + :query force: 1/True/true or 0/False/false, default false :statuscode 200: no error + :statuscode 400: bad parameter :statuscode 404: no such image :statuscode 500: server error diff --git a/components/engine/docs/sources/api/index.rst b/components/engine/docs/sources/api/index.rst new file mode 100644 index 0000000000..8c118bcbc0 --- /dev/null +++ b/components/engine/docs/sources/api/index.rst @@ -0,0 +1,17 @@ +:title: docker documentation +:description: docker documentation +:keywords: + +API's +============= + +This following : + +.. toctree:: + :maxdepth: 3 + + registry_api + index_search_api + docker_remote_api + + diff --git a/components/engine/docs/sources/index/search.rst b/components/engine/docs/sources/api/index_search_api.rst similarity index 87% rename from components/engine/docs/sources/index/search.rst rename to components/engine/docs/sources/api/index_search_api.rst index 498295fa2b..e2f8edc492 100644 --- a/components/engine/docs/sources/index/search.rst +++ b/components/engine/docs/sources/api/index_search_api.rst @@ -1,3 +1,8 @@ +:title: Docker Index documentation +:description: Documentation for docker Index +:keywords: docker, index, api + + ======================= Docker Index Search API ======================= @@ -32,7 +37,7 @@ Search {"name": "base2", "description": "A base ubuntu64 image..."}, ] } - + :query q: what you want to search for :statuscode 200: no error :statuscode 500: server error \ No newline at end of file diff --git a/components/engine/docs/sources/registry/api.rst b/components/engine/docs/sources/api/registry_api.rst similarity index 96% rename from components/engine/docs/sources/registry/api.rst rename to components/engine/docs/sources/api/registry_api.rst index ec2591af4c..e299584e17 100644 --- a/components/engine/docs/sources/registry/api.rst +++ b/components/engine/docs/sources/api/registry_api.rst @@ -1,3 +1,8 @@ +:title: docker Registry documentation +:description: Documentation for docker Registry and Registry API +:keywords: docker, registry, api, index + + =================== Docker Registry API =================== @@ -44,7 +49,7 @@ We expect that there will be multiple registries out there. To help to grasp the .. 
note:: - Mirror registries and private registries which do not use the Index don’t even need to run the registry code. They can be implemented by any kind of transport implementing HTTP GET and PUT. Read-only registries can be powered by a simple static HTTP server. + Mirror registries and private registries which do not use the Index don’t even need to run the registry code. They can be implemented by any kind of transport implementing HTTP GET and PUT. Read-only registries can be powered by a simple static HTTP server. .. note:: @@ -80,7 +85,7 @@ On top of being a runtime for LXC, Docker is the Registry client. It supports: 5. Index returns true/false lettings registry know if it should proceed or error out 6. Get the payload for all layers -It’s possible to run docker pull https:///repositories/samalba/busybox. In this case, docker bypasses the Index. However the security is not guaranteed (in case Registry A is corrupted) because there won’t be any checksum checks. +It’s possible to run docker pull \https:///repositories/samalba/busybox. In this case, docker bypasses the Index. However the security is not guaranteed (in case Registry A is corrupted) because there won’t be any checksum checks. Currently registry redirects to s3 urls for downloads, going forward all downloads need to be streamed through the registry. The Registry will then abstract the calls to S3 by a top-level class which implements sub-classes for S3 and local storage. @@ -107,7 +112,7 @@ API (pulling repository foo/bar): Jsonified checksums (see part 4.4.1) 3. (Docker -> Registry) GET /v1/repositories/foo/bar/tags/latest - **Headers**: + **Headers**: Authorization: Token signature=123abc,repository=”foo/bar”,access=write 4. (Registry -> Index) GET /v1/repositories/foo/bar/images @@ -121,10 +126,10 @@ API (pulling repository foo/bar): **Action**: ( Lookup token see if they have access to pull.) - If good: + If good: HTTP 200 OK Index will invalidate the token - If bad: + If bad: HTTP 401 Unauthorized 5. (Docker -> Registry) GET /v1/images/928374982374/ancestry @@ -186,9 +191,9 @@ API (pushing repos foo/bar): **Headers**: Authorization: Token signature=123abc,repository=”foo/bar”,access=write **Action**:: - - Index: + - Index: will invalidate the token. - - Registry: + - Registry: grants a session (if token is approved) and fetches the images id 5. 
(Docker -> Registry) PUT /v1/images/98765432_parent/json @@ -223,7 +228,7 @@ API (pushing repos foo/bar): **Body**: (The image, id’s, tags and checksums) - [{“id”: “9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f”, + [{“id”: “9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f”, “checksum”: “b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087”}] **Return** HTTP 204 @@ -240,8 +245,8 @@ API (pushing repos foo/bar): The Index has two main purposes (along with its fancy social features): - Resolve short names (to avoid passing absolute URLs all the time) - - username/projectname -> https://registry.docker.io/users//repositories// - - team/projectname -> https://registry.docker.io/team//repositories// + - username/projectname -> \https://registry.docker.io/users//repositories// + - team/projectname -> \https://registry.docker.io/team//repositories// - Authenticate a user as a repos owner (for a central referenced repository) 3.1 Without an Index @@ -296,7 +301,7 @@ POST /v1/users {"email": "sam@dotcloud.com", "password": "toto42", "username": "foobar"'} **Validation**: - - **username** : min 4 character, max 30 characters, all lowercase no special characters. + - **username** : min 4 character, max 30 characters, all lowercase no special characters. - **password**: min 5 characters **Valid**: return HTTP 200 @@ -387,7 +392,7 @@ PUT /v1/repositories///images **Body**: [ {“id”: “9e89cc6f0bc3c38722009fe6857087b486531f9a779a0c17e3ed29dae8f12c4f”, “checksum”: “sha256:b486531f9a779a0c17e3ed29dae8f12c4f9e89cc6f0bc3c38722009fe6857087”} ] - + **Return** 204 5. Chaining Registries diff --git a/components/engine/docs/sources/builder/index.rst b/components/engine/docs/sources/builder/index.rst deleted file mode 100644 index 170be1a5ab..0000000000 --- a/components/engine/docs/sources/builder/index.rst +++ /dev/null @@ -1,14 +0,0 @@ -:title: docker documentation -:description: Documentation for docker builder -:keywords: docker, builder, dockerfile - - -Builder -======= - -Contents: - -.. toctree:: - :maxdepth: 2 - - basics diff --git a/components/engine/docs/sources/commandline/cli.rst b/components/engine/docs/sources/commandline/cli.rst index 47ecb79e67..1a341d3e5d 100644 --- a/components/engine/docs/sources/commandline/cli.rst +++ b/components/engine/docs/sources/commandline/cli.rst @@ -4,7 +4,7 @@ .. _cli: -Command Line Interface +Overview ====================== Docker Usage @@ -24,7 +24,7 @@ Available Commands ~~~~~~~~~~~~~~~~~~ .. toctree:: - :maxdepth: 1 + :maxdepth: 2 command/attach command/build diff --git a/components/engine/docs/sources/commandline/command/commit.rst b/components/engine/docs/sources/commandline/command/commit.rst index c73f8d1898..1d5c503414 100644 --- a/components/engine/docs/sources/commandline/command/commit.rst +++ b/components/engine/docs/sources/commandline/command/commit.rst @@ -16,6 +16,7 @@ Full -run example:: {"Hostname": "", "User": "", + "CpuShares": 0, "Memory": 0, "MemorySwap": 0, "PortSpecs": ["22", "80", "443"], diff --git a/components/engine/docs/sources/commandline/command/run.rst b/components/engine/docs/sources/commandline/command/run.rst index d5e571b41b..95fb208dd3 100644 --- a/components/engine/docs/sources/commandline/command/run.rst +++ b/components/engine/docs/sources/commandline/command/run.rst @@ -9,6 +9,7 @@ Run a command in a new container -a=map[]: Attach to stdin, stdout or stderr. 
+ -c=0: CPU shares (relative weight) -d=false: Detached mode: leave the container running in the background -e=[]: Set environment variables -h="": Container host name diff --git a/components/engine/docs/sources/commandline/index.rst b/components/engine/docs/sources/commandline/index.rst index 72290fa7a8..fecf8e4885 100644 --- a/components/engine/docs/sources/commandline/index.rst +++ b/components/engine/docs/sources/commandline/index.rst @@ -9,8 +9,33 @@ Commands Contents: .. toctree:: - :maxdepth: 3 + :maxdepth: 1 - basics - workingwithrepository cli + attach + build + commit + diff + export + history + images + import + info + inspect + kill + login + logs + port + ps + pull + push + restart + rm + rmi + run + search + start + stop + tag + version + wait \ No newline at end of file diff --git a/components/engine/docs/sources/commandline/workingwithrepository.rst b/components/engine/docs/sources/commandline/workingwithrepository.rst deleted file mode 100644 index ae749fdb03..0000000000 --- a/components/engine/docs/sources/commandline/workingwithrepository.rst +++ /dev/null @@ -1,42 +0,0 @@ -.. _working_with_the_repository: - -Working with the repository -============================ - -Connecting to the repository ----------------------------- - -You create a user on the central docker repository by running - -.. code-block:: bash - - docker login - - -If your username does not exist it will prompt you to also enter a password and your e-mail address. It will then -automatically log you in. - - -Committing a container to a named image ---------------------------------------- - -In order to commit to the repository it is required to have committed your container to an image with your namespace. - -.. code-block:: bash - - # for example docker commit $CONTAINER_ID dhrp/kickassapp - docker commit / - - -Pushing a container to the repository ------------------------------------------ - -In order to push an image to the repository you need to have committed your container to a named image (see above) - -Now you can commit this image to the repository - -.. code-block:: bash - - # for example docker push dhrp/kickassapp - docker push - diff --git a/components/engine/docs/sources/static_files/lego_docker.jpg b/components/engine/docs/sources/concepts/images/lego_docker.jpg similarity index 100% rename from components/engine/docs/sources/static_files/lego_docker.jpg rename to components/engine/docs/sources/concepts/images/lego_docker.jpg diff --git a/components/engine/docs/sources/concepts/index.rst b/components/engine/docs/sources/concepts/index.rst index 9156524999..d8e1af5770 100644 --- a/components/engine/docs/sources/concepts/index.rst +++ b/components/engine/docs/sources/concepts/index.rst @@ -12,6 +12,6 @@ Contents: .. toctree:: :maxdepth: 1 - introduction + ../index buildingblocks diff --git a/components/engine/docs/sources/concepts/introduction.rst b/components/engine/docs/sources/concepts/introduction.rst index b7e1b04f05..fcdd37a791 100644 --- a/components/engine/docs/sources/concepts/introduction.rst +++ b/components/engine/docs/sources/concepts/introduction.rst @@ -2,8 +2,6 @@ :description: An introduction to docker and standard containers? :keywords: containers, lxc, concepts, explanation -.. _introduction: - Introduction ============ @@ -20,7 +18,7 @@ Docker is a great building block for automating distributed systems: large-scale - **Isolation** docker isolates processes from each other and from the underlying host, using lightweight containers. 
- **Repeatability** Because containers are isolated in their own filesystem, they behave the same regardless of where, when, and alongside what they run. -.. image:: http://www.docker.io/_static/lego_docker.jpg +.. image:: images/lego_docker.jpg What is a Standard Container? diff --git a/components/engine/docs/sources/conf.py b/components/engine/docs/sources/conf.py index 4c54d8bb62..d443d34052 100644 --- a/components/engine/docs/sources/conf.py +++ b/components/engine/docs/sources/conf.py @@ -41,7 +41,7 @@ html_add_permalinks = None # The master toctree document. -master_doc = 'index' +master_doc = 'toctree' # General information about the project. project = u'Docker' diff --git a/components/engine/docs/sources/contributing/devenvironment.rst b/components/engine/docs/sources/contributing/devenvironment.rst index ea5821b7da..0d202596c8 100644 --- a/components/engine/docs/sources/contributing/devenvironment.rst +++ b/components/engine/docs/sources/contributing/devenvironment.rst @@ -16,7 +16,7 @@ Instructions that have been verified to work on Ubuntu 12.10, mkdir -p $GOPATH/src/github.com/dotcloud cd $GOPATH/src/github.com/dotcloud - git clone git@github.com:dotcloud/docker.git + git clone git://github.com/dotcloud/docker.git cd docker go get -v github.com/dotcloud/docker/... diff --git a/components/engine/docs/sources/dotcloud.yml b/components/engine/docs/sources/dotcloud.yml deleted file mode 100644 index 5a8f50f9e9..0000000000 --- a/components/engine/docs/sources/dotcloud.yml +++ /dev/null @@ -1,2 +0,0 @@ -www: - type: static \ No newline at end of file diff --git a/components/engine/docs/sources/examples/couchdb_data_volumes.rst b/components/engine/docs/sources/examples/couchdb_data_volumes.rst index df1b5299a4..1b1d7ff79c 100644 --- a/components/engine/docs/sources/examples/couchdb_data_volumes.rst +++ b/components/engine/docs/sources/examples/couchdb_data_volumes.rst @@ -5,7 +5,7 @@ .. _running_couchdb_service: Create a CouchDB service -====================== +======================== .. include:: example_header.inc diff --git a/components/engine/docs/sources/examples/python_web_app.rst b/components/engine/docs/sources/examples/python_web_app.rst index 33caa52c1e..992a09dc42 100644 --- a/components/engine/docs/sources/examples/python_web_app.rst +++ b/components/engine/docs/sources/examples/python_web_app.rst @@ -58,7 +58,7 @@ Use the new image we just created and create a new container with network port 5 .. code-block:: bash docker logs $WEB_WORKER - * Running on http://0.0.0.0:5000/ + * Running on \http://0.0.0.0:5000/ view the logs for the new container using the WEB_WORKER variable, and if everything worked as planned you should see the line "Running on http://0.0.0.0:5000/" in the log output. @@ -70,7 +70,7 @@ lookup the public-facing port which is NAT-ed store the private port used by the .. code-block:: bash - curl http://`hostname`:$WEB_PORT + curl \http://`hostname`:$WEB_PORT Hello world! access the web app using curl. If everything worked as planned you should see the line "Hello world!" inside of your console. diff --git a/components/engine/docs/sources/faq.rst b/components/engine/docs/sources/faq.rst index 51fc00b306..b96ed06437 100644 --- a/components/engine/docs/sources/faq.rst +++ b/components/engine/docs/sources/faq.rst @@ -15,7 +15,7 @@ Most frequently asked questions. 3. 
**Does Docker run on Mac OS X or Windows?** - Not at this time, Docker currently only runs on Linux, but you can use VirtualBox to run Docker in a virtual machine on your box, and get the best of both worlds. Check out the MacOSX_ and Windows_ intallation guides. + Not at this time, Docker currently only runs on Linux, but you can use VirtualBox to run Docker in a virtual machine on your box, and get the best of both worlds. Check out the MacOSX_ and Windows_ installation guides. 4. **How do containers compare to virtual machines?** @@ -35,8 +35,8 @@ Most frequently asked questions. * `Ask questions on Stackoverflow`_ * `Join the conversation on Twitter`_ - .. _Windows: ../documentation/installation/windows.html - .. _MacOSX: ../documentation/installation/macos.html + .. _Windows: ../installation/windows/ + .. _MacOSX: ../installation/vagrant/ .. _the repo: http://www.github.com/dotcloud/docker .. _IRC\: docker on freenode: irc://chat.freenode.net#docker .. _Github: http://www.github.com/dotcloud/docker diff --git a/components/engine/docs/sources/gettingstarted/index.html b/components/engine/docs/sources/gettingstarted/index.html deleted file mode 100644 index 96175d6dec..0000000000 --- a/components/engine/docs/sources/gettingstarted/index.html +++ /dev/null @@ -1,210 +0,0 @@ - - - - - - - - - - Docker - the Linux container runtime - - - - - - - - - - - - - - - - - - - - - - - -
[Contents of the deleted static "Getting Started" page: navigation chrome, a development-status warning, Ubuntu 12.04/12.10 install steps (install linux-image-extra-`uname -r`, add the dotcloud/lxc-docker PPA, apt-get update, apt-get install lxc-docker, docker run -i -t ubuntu /bin/bash), a pointer to the Hello world example, contributing instructions, the Vagrant/VirtualBox quick install for other operating systems, a "More resources" section and an embedded signup form.]
diff --git a/components/engine/docs/sources/index.html b/components/engine/docs/sources/index.html
deleted file mode 100644
index 44a1cc737c..0000000000
--- a/components/engine/docs/sources/index.html
+++ /dev/null
@@ -1,314 +0,0 @@
[Contents of the deleted static landing page "Docker - the Linux container engine": the one-line project description, feature highlights (Heterogeneous payloads, Any server, Isolation, Repeatability), a "New! Docker Index" announcement, Twitter testimonials, the "Notable features" list (filesystem, resource and network isolation, copy-on-write, logging, change management, interactive shell), the "Under the hood" list (cgroups and namespaces, AUFS, Go, lxc), the "Who started it" dotCloud blurb, and page chrome such as a signup form and footer.]
- - - - - - - - - - - - diff --git a/components/engine/docs/sources/index.rst b/components/engine/docs/sources/index.rst index 4c46653808..172f82083c 100644 --- a/components/engine/docs/sources/index.rst +++ b/components/engine/docs/sources/index.rst @@ -1,25 +1,127 @@ -:title: docker documentation -:description: docker documentation -:keywords: +:title: Introduction +:description: An introduction to docker and standard containers? +:keywords: containers, lxc, concepts, explanation -Documentation -============= +.. _introduction: -This documentation has the following resources: +Introduction +============ -.. toctree:: - :maxdepth: 1 +Docker - The Linux container runtime +------------------------------------ - concepts/index - installation/index - examples/index - contributing/index - commandline/index - registry/index - index/index - builder/index - remote-api/index - faq +Docker complements LXC with a high-level API which operates at the process level. It runs unix processes with strong guarantees of isolation and repeatability across servers. + +Docker is a great building block for automating distributed systems: large-scale web deployments, database clusters, continuous deployment systems, private PaaS, service-oriented architectures, etc. -.. image:: http://www.docker.io/_static/lego_docker.jpg +- **Heterogeneous payloads** Any combination of binaries, libraries, configuration files, scripts, virtualenvs, jars, gems, tarballs, you name it. No more juggling between domain-specific tools. Docker can deploy and run them all. +- **Any server** Docker can run on any x64 machine with a modern linux kernel - whether it's a laptop, a bare metal server or a VM. This makes it perfect for multi-cloud deployments. +- **Isolation** docker isolates processes from each other and from the underlying host, using lightweight containers. +- **Repeatability** Because containers are isolated in their own filesystem, they behave the same regardless of where, when, and alongside what they run. + +.. image:: concepts/images/lego_docker.jpg + + +What is a Standard Container? +----------------------------- + +Docker defines a unit of software delivery called a Standard Container. The goal of a Standard Container is to encapsulate a software component and all its dependencies in +a format that is self-describing and portable, so that any compliant runtime can run it without extra dependency, regardless of the underlying machine and the contents of the container. + +The spec for Standard Containers is currently work in progress, but it is very straightforward. It mostly defines 1) an image format, 2) a set of standard operations, and 3) an execution environment. + +A great analogy for this is the shipping container. Just like Standard Containers are a fundamental unit of software delivery, shipping containers (http://bricks.argz.com/ins/7823-1/12) are a fundamental unit of physical delivery. + +Standard operations +~~~~~~~~~~~~~~~~~~~ + +Just like shipping containers, Standard Containers define a set of STANDARD OPERATIONS. Shipping containers can be lifted, stacked, locked, loaded, unloaded and labelled. Similarly, standard containers can be started, stopped, copied, snapshotted, downloaded, uploaded and tagged. + + +Content-agnostic +~~~~~~~~~~~~~~~~~~~ + +Just like shipping containers, Standard Containers are CONTENT-AGNOSTIC: all standard operations have the same effect regardless of the contents. 
A shipping container will be stacked in exactly the same way whether it contains Vietnamese powder coffee or spare Maserati parts. Similarly, Standard Containers are started or uploaded in the same way whether they contain a postgres database, a php application with its dependencies and application server, or Java build artifacts. + + +Infrastructure-agnostic +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Both types of containers are INFRASTRUCTURE-AGNOSTIC: they can be transported to thousands of facilities around the world, and manipulated by a wide variety of equipment. A shipping container can be packed in a factory in Ukraine, transported by truck to the nearest routing center, stacked onto a train, loaded into a German boat by an Australian-built crane, stored in a warehouse at a US facility, etc. Similarly, a standard container can be bundled on my laptop, uploaded to S3, downloaded, run and snapshotted by a build server at Equinix in Virginia, uploaded to 10 staging servers in a home-made Openstack cluster, then sent to 30 production instances across 3 EC2 regions. + + +Designed for automation +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Because they offer the same standard operations regardless of content and infrastructure, Standard Containers, just like their physical counterpart, are extremely well-suited for automation. In fact, you could say automation is their secret weapon. + +Many things that once required time-consuming and error-prone human effort can now be programmed. Before shipping containers, a bag of powder coffee was hauled, dragged, dropped, rolled and stacked by 10 different people in 10 different locations by the time it reached its destination. 1 out of 50 disappeared. 1 out of 20 was damaged. The process was slow, inefficient and cost a fortune - and was entirely different depending on the facility and the type of goods. + +Similarly, before Standard Containers, by the time a software component ran in production, it had been individually built, configured, bundled, documented, patched, vendored, templated, tweaked and instrumented by 10 different people on 10 different computers. Builds failed, libraries conflicted, mirrors crashed, post-it notes were lost, logs were misplaced, cluster updates were half-broken. The process was slow, inefficient and cost a fortune - and was entirely different depending on the language and infrastructure provider. + + +Industrial-grade delivery +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +There are 17 million shipping containers in existence, packed with every physical good imaginable. Every single one of them can be loaded on the same boats, by the same cranes, in the same facilities, and sent anywhere in the World with incredible efficiency. It is embarrassing to think that a 30 ton shipment of coffee can safely travel half-way across the World in *less time* than it takes a software team to deliver its code from one datacenter to another sitting 10 miles away. + +With Standard Containers we can put an end to that embarrassment, by making INDUSTRIAL-GRADE DELIVERY of software a reality. + + +Standard Container Specification +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +(TODO) + +Image format +~~~~~~~~~~~~ + +Standard operations +~~~~~~~~~~~~~~~~~~~ + +- Copy +- Run +- Stop +- Wait +- Commit +- Attach standard streams +- List filesystem changes +- ... 
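For concreteness, here is a rough sketch of how the standard operations listed above map onto the current docker command line. This mapping is an illustration only, not part of the specification; the image and repository names are placeholders and the exact commands and flags may change.

.. code-block:: bash

    # Run: start a process in a fresh container, detached
    CONTAINER=$(docker run -d ubuntu /bin/sh -c "echo hello world")

    # Wait / Stop
    docker wait $CONTAINER
    docker stop $CONTAINER

    # List filesystem changes made inside the container
    docker diff $CONTAINER

    # Commit: snapshot the container as a new image
    docker commit $CONTAINER myname/mysnapshot

    # Attach standard streams of a running container
    # docker attach $CONTAINER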
+ +Execution environment +~~~~~~~~~~~~~~~~~~~~~ + +Root filesystem +^^^^^^^^^^^^^^^ + +Environment variables +^^^^^^^^^^^^^^^^^^^^^ + +Process arguments +^^^^^^^^^^^^^^^^^ + +Networking +^^^^^^^^^^ + +Process namespacing +^^^^^^^^^^^^^^^^^^^ + +Resource limits +^^^^^^^^^^^^^^^ + +Process monitoring +^^^^^^^^^^^^^^^^^^ + +Logging +^^^^^^^ + +Signals +^^^^^^^ + +Pseudo-terminal allocation +^^^^^^^^^^^^^^^^^^^^^^^^^^ + +Security +^^^^^^^^ + diff --git a/components/engine/docs/sources/index/index.rst b/components/engine/docs/sources/index/index.rst deleted file mode 100644 index 7637a4e779..0000000000 --- a/components/engine/docs/sources/index/index.rst +++ /dev/null @@ -1,15 +0,0 @@ -:title: Docker Index documentation -:description: Documentation for docker Index -:keywords: docker, index, api - - - -Index -===== - -Contents: - -.. toctree:: - :maxdepth: 2 - - search diff --git a/components/engine/docs/sources/index/variable.rst b/components/engine/docs/sources/index/variable.rst new file mode 100644 index 0000000000..efbcfae80c --- /dev/null +++ b/components/engine/docs/sources/index/variable.rst @@ -0,0 +1,23 @@ +================================= +Docker Index Environment Variable +================================= + +Variable +-------- + +.. code-block:: sh + + DOCKER_INDEX_URL + +Setting this environment variable on the docker server will change the URL docker index. +This address is used in commands such as ``docker login``, ``docker push`` and ``docker pull``. +The docker daemon doesn't need to be restarted for this parameter to take effect. + +Example +------- + +.. code-block:: sh + + docker -d & + export DOCKER_INDEX_URL="https://index.docker.io" + diff --git a/components/engine/docs/sources/installation/amazon.rst b/components/engine/docs/sources/installation/amazon.rst index 012c78f401..64ff20f8be 100644 --- a/components/engine/docs/sources/installation/amazon.rst +++ b/components/engine/docs/sources/installation/amazon.rst @@ -68,7 +68,7 @@ Docker can now be installed on Amazon EC2 with a single vagrant command. Vagrant If it stalls indefinitely on ``[default] Waiting for SSH to become available...``, Double check your default security zone on AWS includes rights to SSH (port 22) to your container. - If you have an advanced AWS setup, you might want to have a look at the https://github.com/mitchellh/vagrant-aws + If you have an advanced AWS setup, you might want to have a look at https://github.com/mitchellh/vagrant-aws 7. Connect to your machine diff --git a/components/engine/docs/sources/installation/binaries.rst b/components/engine/docs/sources/installation/binaries.rst index 2607f3680f..25d13ab68e 100644 --- a/components/engine/docs/sources/installation/binaries.rst +++ b/components/engine/docs/sources/installation/binaries.rst @@ -5,48 +5,58 @@ Binaries **Please note this project is currently under heavy development. It should not be used in production.** +**This instruction set is meant for hackers who want to try out Docker on a variety of environments.** Right now, the officially supported distributions are: -- Ubuntu 12.04 (precise LTS) (64-bit) -- Ubuntu 12.10 (quantal) (64-bit) +- :ref:`ubuntu_precise` +- :ref:`ubuntu_raring` -Install dependencies: ---------------------- +But we know people have had success running it under -:: +- Debian +- Suse +- :ref:`arch_linux` - sudo apt-get install lxc bsdtar - sudo apt-get install linux-image-extra-`uname -r` -The linux-image-extra package is needed on standard Ubuntu EC2 AMIs in order to install the aufs kernel module. 
+Dependencies:
+-------------
-Install the docker binary:
-::
+
+* 3.8 Kernel
+* AUFS filesystem support
+* lxc
+* bsdtar
+
+
+Get the docker binary:
+----------------------
+
+.. code-block:: bash

    wget http://get.docker.io/builds/Linux/x86_64/docker-latest.tgz
    tar -xf docker-latest.tgz
-   sudo cp ./docker-latest/docker /usr/local/bin
-
-Note: docker currently only supports 64-bit Linux hosts.

Run the docker daemon
---------------------
-::
+.. code-block:: bash

-   sudo docker -d &
+   # start docker in daemon mode from the directory you unpacked
+   sudo ./docker -d &

Run your first container!
-------------------------
-::
+.. code-block:: bash

-   docker run -i -t ubuntu /bin/bash
+   # check your docker version
+   ./docker version
+
+   # run a container and open an interactive shell in the container
+   ./docker run -i -t ubuntu /bin/bash

diff --git a/components/engine/docs/sources/installation/index.rst b/components/engine/docs/sources/installation/index.rst
index 0726d9b715..1976f30ba0 100644
--- a/components/engine/docs/sources/installation/index.rst
+++ b/components/engine/docs/sources/installation/index.rst
@@ -14,8 +14,10 @@ Contents:
    ubuntulinux
    binaries
-   archlinux
    vagrant
    windows
    amazon
+   rackspace
+   archlinux
    upgrading
+   kernel

diff --git a/components/engine/docs/sources/installation/kernel.rst b/components/engine/docs/sources/installation/kernel.rst
new file mode 100644
index 0000000000..2ec5940a7f
--- /dev/null
+++ b/components/engine/docs/sources/installation/kernel.rst
@@ -0,0 +1,149 @@
+.. _kernel:
+
+Kernel Requirements
+===================
+
+  The officially supported kernel is the one recommended by the
+  :ref:`ubuntu_linux` installation path. It is the one that most developers
+  will use, and the one that receives the most attention from the core
+  contributors. If you decide to go with a different kernel and hit a bug,
+  please try to reproduce it with the official kernels first.
+
+If for some reason you cannot or do not want to use the "official" kernels,
+here is some technical background about the features (both optional and
+mandatory) that docker needs to run successfully.
+
+In short, you need kernel version 3.8 (or above), compiled to include
+`AUFS support `_. Of course, you need to enable cgroups and namespaces.
+
+
+Namespaces and Cgroups
+----------------------
+
+You need to enable namespaces and cgroups, to the extent of what is needed
+to run LXC containers. Technically, while namespaces were introduced in the
+early 2.6 kernels, we do not advise trying any kernel before 2.6.32 to run
+LXC containers. Note that 2.6.32 has some documented issues regarding
+network namespace setup and teardown; those issues are not a risk if you
+run containers in a private environment, but can lead to denial-of-service
+attacks if you want to run untrusted code in your containers. For more details,
+see `LP#720095 `_.
+
+Kernels 2.6.38, and every version since 3.2, have been deployed successfully
+to run containerized production workloads. Feature-wise, there is no huge
+improvement between 2.6.38 and 3.6 (as far as docker is concerned!).
+
+Starting with version 3.7, the kernel has basic support for
+`Checkpoint/Restore In Userspace `_, which is not used by
+docker at this point, but allows the state of a container to be suspended to
+disk and resumed later.
+
+Version 3.8 provides improvements in stability, which are deemed necessary
+for the operation of docker. Versions 3.2 to 3.5 have been shown to
+exhibit a reproducible bug (for more details, see issue
+`#407 `_).
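A quick way to check whether a given host meets these requirements is sketched below; the exact paths and module names can vary between distributions, so treat this as a rough diagnostic rather than an authoritative test.

.. code-block:: bash

    # kernel version: 3.8 or above is recommended
    uname -r

    # AUFS: the module should be available (built in or loadable)
    grep -w aufs /proc/filesystems || sudo modprobe aufs

    # cgroups: controllers should be mounted (the memory controller may be
    # disabled by default on Debian and Ubuntu, see below)
    grep cgroup /proc/mounts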
+ +Version 3.8 also brings better support for the +`setns() syscall `_ -- but this should not +be a concern since docker does not leverage on this feature for now. + +If you want a technical overview about those concepts, you might +want to check those articles on dotCloud's blog: +`about namespaces `_ +and `about cgroups `_. + + +Important Note About Pre-3.8 Kernels +------------------------------------ + +As mentioned above, kernels before 3.8 are not stable when used with docker. +In some circumstances, you will experience kernel "oopses", or even crashes. +The symptoms include: + +- a container being killed in the middle of an operation (e.g. an ``apt-get`` + command doesn't complete); +- kernel messages including mentioning calls to ``mntput`` or + ``d_hash_and_lookup``; +- kernel crash causing the machine to freeze for a few minutes, or even + completely. + +While it is still possible to use older kernels for development, it is +really not advised to do so. + +Docker checks the kernel version when it starts, and emits a warning if it +detects something older than 3.8. + +See issue `#407 `_ for details. + + +Extra Cgroup Controllers +------------------------ + +Most control groups can be enabled or disabled individually. For instance, +you can decide that you do not want to compile support for the CPU or memory +controller. In some cases, the feature can be enabled or disabled at boot +time. It is worth mentioning that some distributions (like Debian) disable +"expensive" features, like the memory controller, because they can have +a significant performance impact. + +In the specific case of the memory cgroup, docker will detect if the cgroup +is available or not. If it's not, it will print a warning, and it won't +use the feature. If you want to enable that feature -- read on! + + +Memory and Swap Accounting on Debian/Ubuntu +------------------------------------------- + +If you use Debian or Ubuntu kernels, and want to enable memory and swap +accounting, you must add the following command-line parameters to your kernel:: + + cgroup_enable=memory swapaccount + +On Debian or Ubuntu systems, if you use the default GRUB bootloader, you can +add those parameters by editing ``/etc/default/grub`` and extending +``GRUB_CMDLINE_LINUX``. Look for the following line:: + + GRUB_CMDLINE_LINUX="" + +And replace it by the following one:: + + GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount" + +Then run ``update-grub``, and reboot. + + +AUFS +---- + +Docker currently relies on AUFS, an unioning filesystem. +While AUFS is included in the kernels built by the Debian and Ubuntu +distributions, is not part of the standard kernel. This means that if +you decide to roll your own kernel, you will have to patch your +kernel tree to add AUFS. The process is documented on +`AUFS webpage `_. + +Note: the AUFS patch is fairly intrusive, but for the record, people have +successfully applied GRSEC and AUFS together, to obtain hardened production +kernels. + +If you want more information about that topic, there is an +`article about AUFS on dotCloud's blog +`_. + + +BTRFS, ZFS, OverlayFS... +------------------------ + +There is ongoing development on docker, to implement support for +`BTRFS `_ +(see github issue `#443 `_). + +People have also showed interest for `ZFS `_ +(using e.g. `ZFS-on-Linux `_) and OverlayFS. +The latter is functionally close to AUFS, and it might end up being included +in the stock kernel; so it's a strong candidate! 
+
+Would you like to `contribute `_ support for your favorite filesystem?
diff --git a/components/engine/docs/sources/installation/rackspace.rst b/components/engine/docs/sources/installation/rackspace.rst
new file mode 100644
index 0000000000..dfb88aee84
--- /dev/null
+++ b/components/engine/docs/sources/installation/rackspace.rst
@@ -0,0 +1,91 @@
+===============
+Rackspace Cloud
+===============
+
+  Please note this is a community contributed installation path. The only 'official' installation is using the
+  :ref:`ubuntu_linux` installation path. This version may sometimes be out of date.
+
+
+Installing Docker on Ubuntu provided by Rackspace is pretty straightforward, and you should mostly be able to follow the
+:ref:`ubuntu_linux` installation guide.
+
+**However, there is one caveat:**
+
+If you are using any Linux not already shipping with the 3.8 kernel you will need to install it, and this is a little
+more difficult on Rackspace.
+
+Rackspace boots their servers using grub's menu.lst and does not like non-'virtual' kernel packages (e.g. xen compatible)
+there, although they do work. This means ``update-grub`` does not have the expected result, and you need to
+set the kernel manually.
+
+**Do not attempt this on a production machine!**
+
+.. code-block:: bash
+
+    # update apt
+    apt-get update
+
+    # install the new kernel
+    apt-get install linux-generic-lts-raring
+
+
+Great, now you have the kernel installed in /boot/; the next step is to make it the one that boots next time.
+
+.. code-block:: bash
+
+    # find the exact names
+    find /boot/ -name '*3.8*'
+
+    # this should return some results
+
+
+Now you need to manually edit /boot/grub/menu.lst; you will find a section at the bottom with the existing options.
+Copy the top one and substitute the new kernel into it. Make sure the new kernel is on top, and double-check that the
+kernel and initrd entries point to the right files.
+
+Take special care to double-check the kernel and initrd entries.
+
+.. code-block:: bash
+
+    # now edit /boot/grub/menu.lst
+    vi /boot/grub/menu.lst
+
+It will probably look something like this:
+
+::
+
+    ## ## End Default Options ##
+
+    title Ubuntu 12.04.2 LTS, kernel 3.8.x generic
+    root (hd0)
+    kernel /boot/vmlinuz-3.8.0-19-generic root=/dev/xvda1 ro quiet splash console=hvc0
+    initrd /boot/initrd.img-3.8.0-19-generic
+
+    title Ubuntu 12.04.2 LTS, kernel 3.2.0-38-virtual
+    root (hd0)
+    kernel /boot/vmlinuz-3.2.0-38-virtual root=/dev/xvda1 ro quiet splash console=hvc0
+    initrd /boot/initrd.img-3.2.0-38-virtual
+
+    title Ubuntu 12.04.2 LTS, kernel 3.2.0-38-virtual (recovery mode)
+    root (hd0)
+    kernel /boot/vmlinuz-3.2.0-38-virtual root=/dev/xvda1 ro quiet splash single
+    initrd /boot/initrd.img-3.2.0-38-virtual
+
+
+Reboot the server (either via the command line or the console)
+
+.. code-block:: bash
+
+    # reboot
+
+Verify the kernel was updated
+
+.. code-block:: bash
+
+    uname -a
+    # Linux docker-12-04 3.8.0-19-generic #30~precise1-Ubuntu SMP Wed May 1 22:26:36 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
+
+    # nice! 3.8.
+
+
+Now you can finish with the :ref:`ubuntu_linux` instructions.
\ No newline at end of file
diff --git a/components/engine/docs/sources/installation/ubuntulinux.rst b/components/engine/docs/sources/installation/ubuntulinux.rst
index 955e8eb3b0..de4a2bb9ca 100644
--- a/components/engine/docs/sources/installation/ubuntulinux.rst
+++ b/components/engine/docs/sources/installation/ubuntulinux.rst
@@ -5,20 +5,39 @@ Ubuntu Linux
**Please note this project is currently under heavy development.
It should not be used in production.** +Right now, the officially supported distribution are: -Right now, the officially supported distributions are: +- :ref:`ubuntu_precise` +- :ref:`ubuntu_raring` + +Docker has the following dependencies + +* Linux kernel 3.8 +* AUFS file system support (we are working on BTRFS support as an alternative) + +.. _ubuntu_precise: + +Ubuntu Precise 12.04 (LTS) (64-bit) +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +This installation path should work at all times. -- Ubuntu 12.04 (precise LTS) (64-bit) -- Ubuntu 12.10 (quantal) (64-bit) Dependencies ------------ -The linux-image-extra package is only needed on standard Ubuntu EC2 AMIs in order to install the aufs kernel module. +**Linux kernel 3.8** + +Due to a bug in LXC docker works best on the 3.8 kernel. Precise comes with a 3.2 kernel, so we need to upgrade it. The kernel we install comes with AUFS built in. + .. code-block:: bash - sudo apt-get install linux-image-extra-`uname -r` lxc bsdtar + # install the backported kernel + sudo apt-get update && sudo apt-get install linux-image-3.8.0-19-generic + + # reboot + sudo reboot Installation @@ -28,34 +47,77 @@ Docker is available as a Ubuntu PPA (Personal Package Archive), `hosted on launchpad `_ which makes installing Docker on Ubuntu very easy. +.. code-block:: bash + # Add the PPA sources to your apt sources list. + sudo sh -c "echo 'deb http://ppa.launchpad.net/dotcloud/lxc-docker/ubuntu precise main' > /etc/apt/sources.list.d/lxc-docker.list" -Add the custom package sources to your apt sources list. Copy and paste the following lines at once. + # Update your sources, you will see a warning. + sudo apt-get update + + # Install, you will see another warning that the package cannot be authenticated. Confirm install. + sudo apt-get install lxc-docker + +Verify it worked .. code-block:: bash - sudo sh -c "echo 'deb http://ppa.launchpad.net/dotcloud/lxc-docker/ubuntu precise main' >> /etc/apt/sources.list" + # download the base 'ubuntu' container and run bash inside it while setting up an interactive shell + docker run -i -t ubuntu /bin/bash + + # type 'exit' to exit -Update your sources. You will see a warning that GPG signatures cannot be verified. +**Done!**, now continue with the :ref:`hello_world` example. + +.. _ubuntu_raring: + +Ubuntu Raring 13.04 (64 bit) +^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +Dependencies +------------ + +**AUFS filesystem support** + +Ubuntu Raring already comes with the 3.8 kernel, so we don't need to install it. However, not all systems +have AUFS filesystem support enabled, so we need to install it. .. code-block:: bash sudo apt-get update + sudo apt-get install linux-image-extra-`uname -r` + +Installation +------------ + +Docker is available as a Ubuntu PPA (Personal Package Archive), +`hosted on launchpad `_ +which makes installing Docker on Ubuntu very easy. -Now install it, you will see another warning that the package cannot be authenticated. Confirm install. +Add the custom package sources to your apt sources list. .. code-block:: bash - curl get.docker.io | sudo sh -x + # add the sources to your apt + sudo add-apt-repository ppa:dotcloud/lxc-docker + + # update + sudo apt-get update + + # install + sudo apt-get install lxc-docker Verify it worked .. code-block:: bash - docker + # download the base 'ubuntu' container and run bash inside it while setting up an interactive shell + docker run -i -t ubuntu /bin/bash + + # type exit to exit **Done!**, now continue with the :ref:`hello_world` example. 
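If you would rather avoid the "package cannot be authenticated" warning in the Precise instructions above, you can let ``add-apt-repository`` import the PPA and its signing key before installing. This is only a sketch and assumes the ``python-software-properties`` package, which provides ``add-apt-repository`` on 12.04:

.. code-block:: bash

    # add-apt-repository fetches the PPA's GPG key as well as the source entry
    sudo apt-get install python-software-properties
    sudo add-apt-repository ppa:dotcloud/lxc-docker
    sudo apt-get update
    sudo apt-get install lxc-docker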
diff --git a/components/engine/docs/sources/installation/upgrading.rst b/components/engine/docs/sources/installation/upgrading.rst index a5172b6d76..8dfde73891 100644 --- a/components/engine/docs/sources/installation/upgrading.rst +++ b/components/engine/docs/sources/installation/upgrading.rst @@ -3,38 +3,53 @@ Upgrading ============ -These instructions are for upgrading your Docker binary for when you had a custom (non package manager) installation. -If you istalled docker using apt-get, use that to upgrade. +**These instructions are for upgrading Docker** -Get the latest docker binary: +After normal installation +------------------------- -:: +If you installed Docker normally using apt-get or used Vagrant, use apt-get to upgrade. - wget http://get.docker.io/builds/$(uname -s)/$(uname -m)/docker-latest.tgz +.. code-block:: bash + + # update your sources list + sudo apt-get update + + # install the latest + sudo apt-get install lxc-docker +After manual installation +------------------------- -Unpack it to your current dir +If you installed the Docker binary -:: +.. code-block:: bash + + # kill the running docker daemon + killall docker + + +.. code-block:: bash + + # get the latest binary + wget http://get.docker.io/builds/Linux/x86_64/docker-latest.tgz + + +.. code-block:: bash + + # Unpack it to your current dir tar -xf docker-latest.tgz -Stop your current daemon. How you stop your daemon depends on how you started it. +Start docker in daemon mode (-d) and disconnect (&) starting ./docker will start the version in your current dir rather than a version which +might reside in your path. -- If you started the daemon manually (``sudo docker -d``), you can just kill the process: ``killall docker`` -- If the process was started using upstart (the ubuntu startup daemon), you may need to use that to stop it - - -Start docker in daemon mode (-d) and disconnect (&) starting ./docker will start the version in your current dir rather -than the one in your PATH. - -Now start the daemon - -:: +.. code-block:: bash + # start the new version sudo ./docker -d & diff --git a/components/engine/docs/sources/installation/vagrant.rst b/components/engine/docs/sources/installation/vagrant.rst index 465a6c3388..d1a76b5a2b 100644 --- a/components/engine/docs/sources/installation/vagrant.rst +++ b/components/engine/docs/sources/installation/vagrant.rst @@ -1,14 +1,10 @@ .. _install_using_vagrant: -Using Vagrant -============= +Using Vagrant (Mac, Linux) +========================== - Please note this is a community contributed installation path. The only 'official' installation is using the - :ref:`ubuntu_linux` installation path. This version may sometimes be out of date. - -**Requirements:** -This guide will setup a new virtual machine with docker installed on your computer. This works on most operating +This guide will setup a new virtualbox virtual machine with docker installed on your computer. This works on most operating systems, including MacOX, Windows, Linux, FreeBSD and others. If you can install these and have at least 400Mb RAM to spare you should be good. 
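For readers who just want the short version of the Vagrant route, a typical session looks roughly like this, assuming VirtualBox and Vagrant are already installed and using the Vagrantfile shipped at the root of the docker repository (box names and resource settings may differ on your machine):

.. code-block:: bash

    # fetch the repository that contains the Vagrantfile
    git clone git://github.com/dotcloud/docker.git
    cd docker

    # create and provision the VM, then log into it
    vagrant up
    vagrant ssh

    # inside the VM, docker should be installed and ready to use
    sudo docker run -i -t ubuntu /bin/bash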
diff --git a/components/engine/docs/sources/installation/windows.rst b/components/engine/docs/sources/installation/windows.rst index a89d3a9014..230ac78051 100644 --- a/components/engine/docs/sources/installation/windows.rst +++ b/components/engine/docs/sources/installation/windows.rst @@ -3,8 +3,8 @@ :keywords: Docker, Docker documentation, Windows, requirements, virtualbox, vagrant, git, ssh, putty, cygwin -Windows (with Vagrant) -====================== +Using Vagrant (Windows) +======================= Please note this is a community contributed installation path. The only 'official' installation is using the :ref:`ubuntu_linux` installation path. This version may be out of date because it depends on some binaries to be updated and published diff --git a/components/engine/docs/sources/nginx.conf b/components/engine/docs/sources/nginx.conf deleted file mode 100644 index 97ffd2c0e5..0000000000 --- a/components/engine/docs/sources/nginx.conf +++ /dev/null @@ -1,6 +0,0 @@ - -# rule to redirect original links created when hosted on github pages -rewrite ^/documentation/(.*).html http://docs.docker.io/en/latest/$1/ permanent; - -# rewrite the stuff which was on the current page -rewrite ^/gettingstarted.html$ /gettingstarted/ permanent; diff --git a/components/engine/docs/sources/registry/index.rst b/components/engine/docs/sources/registry/index.rst deleted file mode 100644 index d3788f53cc..0000000000 --- a/components/engine/docs/sources/registry/index.rst +++ /dev/null @@ -1,15 +0,0 @@ -:title: docker Registry documentation -:description: Documentation for docker Registry and Registry API -:keywords: docker, registry, api, index - - - -Registry -======== - -Contents: - -.. toctree:: - :maxdepth: 2 - - api diff --git a/components/engine/docs/sources/remote-api/index.rst b/components/engine/docs/sources/remote-api/index.rst deleted file mode 100644 index 5b3b790b56..0000000000 --- a/components/engine/docs/sources/remote-api/index.rst +++ /dev/null @@ -1,15 +0,0 @@ -:title: docker Remote API documentation -:description: Documentation for docker Remote API -:keywords: docker, rest, api, http - - - -Remote API -========== - -Contents: - -.. toctree:: - :maxdepth: 2 - - api diff --git a/components/engine/docs/sources/toctree.rst b/components/engine/docs/sources/toctree.rst new file mode 100644 index 0000000000..09f2a7af5b --- /dev/null +++ b/components/engine/docs/sources/toctree.rst @@ -0,0 +1,22 @@ +:title: docker documentation +:description: docker documentation +:keywords: + +Documentation +============= + +This documentation has the following resources: + +.. toctree:: + :titlesonly: + + concepts/index + installation/index + use/index + examples/index + commandline/index + contributing/index + api/index + faq + +.. image:: concepts/images/lego_docker.jpg diff --git a/components/engine/docs/sources/commandline/basics.rst b/components/engine/docs/sources/use/basics.rst similarity index 97% rename from components/engine/docs/sources/commandline/basics.rst rename to components/engine/docs/sources/use/basics.rst index 8dd8ec9de3..ffd2a7b96c 100644 --- a/components/engine/docs/sources/commandline/basics.rst +++ b/components/engine/docs/sources/use/basics.rst @@ -76,8 +76,8 @@ Expose a service on a TCP port echo "Daemon received: $(docker logs $JOB)" -Committing (saving) an image ------------------------------ +Committing (saving) a container state +------------------------------------- Save your containers state to a container image, so the state can be re-used. 
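To make the renamed "Committing (saving) a container state" section concrete, here is a small end-to-end sketch; the image name is a placeholder and the commands reflect the current client:

.. code-block:: bash

    # run a container that modifies its filesystem, and wait for it to finish
    JOB=$(docker run -d ubuntu /bin/sh -c "echo hello > /hello.txt")
    docker wait $JOB

    # commit the stopped container to a new image
    docker commit $JOB my_saved_state

    # containers started from the new image see the change
    docker run my_saved_state /bin/cat /hello.txt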
diff --git a/components/engine/docs/sources/builder/basics.rst b/components/engine/docs/sources/use/builder.rst similarity index 96% rename from components/engine/docs/sources/builder/basics.rst rename to components/engine/docs/sources/use/builder.rst index 735b2e575f..84d275782e 100644 --- a/components/engine/docs/sources/builder/basics.rst +++ b/components/engine/docs/sources/use/builder.rst @@ -107,8 +107,7 @@ The `ENV` instruction sets the environment variable `` to the value functionally equivalent to prefixing the command with `=` .. note:: - The environment variables are local to the Dockerfile, they will not persist - when a container is run from the resulting image. + The environment variables will persist when a container is run from the resulting image. 2.7 INSERT ---------- @@ -122,6 +121,8 @@ curl was installed within the image. .. note:: The path must include the file name. +.. note:: + This instruction has temporarily disabled 3. Dockerfile Examples ====================== @@ -179,4 +180,4 @@ curl was installed within the image. # Will output something like ===> 695d7793cbe4 # You'll now have two images, 907ad6c2736f with /bar, and 695d7793cbe4 with - # /oink. \ No newline at end of file + # /oink. diff --git a/components/engine/docs/sources/use/index.rst b/components/engine/docs/sources/use/index.rst new file mode 100644 index 0000000000..9939dc7ea8 --- /dev/null +++ b/components/engine/docs/sources/use/index.rst @@ -0,0 +1,19 @@ +:title: docker documentation +:description: -- todo: change me +:keywords: todo: change me + + + +Use +======== + +Contents: + +.. toctree:: + :maxdepth: 1 + + basics + workingwithrepository + builder + puppet + diff --git a/components/engine/docs/sources/use/puppet.rst b/components/engine/docs/sources/use/puppet.rst new file mode 100644 index 0000000000..af2d5c8d57 --- /dev/null +++ b/components/engine/docs/sources/use/puppet.rst @@ -0,0 +1,109 @@ + +.. _install_using_puppet: + +Using Puppet +============= + +.. note:: + + Please note this is a community contributed installation path. The only 'official' installation is using the + :ref:`ubuntu_linux` installation path. This version may sometimes be out of date. + +Requirements +------------ + +To use this guide you'll need a working installation of Puppet from `Puppetlabs `_ . + +The module also currently uses the official PPA so only works with Ubuntu. + +Installation +------------ + +The module is available on the `Puppet Forge `_ +and can be installed using the built-in module tool. + + .. code-block:: bash + + puppet module install garethr/docker + +It can also be found on `GitHub `_ +if you would rather download the source. + +Usage +----- + +The module provides a puppet class for installing docker and two defined types +for managing images and containers. + +Installation +~~~~~~~~~~~~ + + .. code-block:: ruby + + include 'docker' + +Images +~~~~~~ + +The next step is probably to install a docker image, for this we have a +defined type which can be used like so: + + .. code-block:: ruby + + docker::image { 'base': } + +This is equivalent to running: + + .. code-block:: bash + + docker pull base + +Note that it will only if the image of that name does not already exist. +This is downloading a large binary so on first run can take a while. +For that reason this define turns off the default 5 minute timeout +for exec. Note that you can also remove images you no longer need with: + + .. 
code-block:: ruby
+
+    docker::image { 'base':
+      ensure => 'absent',
+    }
+
+Containers
+~~~~~~~~~~
+
+Now that you have an image you can run commands within a container managed by
+docker.
+
+  .. code-block:: ruby
+
+    docker::run { 'helloworld':
+      image   => 'base',
+      command => '/bin/sh -c "while true; do echo hello world; sleep 1; done"',
+    }
+
+This is equivalent to running the following command, but under upstart:
+
+  .. code-block:: bash
+
+    docker run -d base /bin/sh -c "while true; do echo hello world; sleep 1; done"
+
+Run also contains a number of optional parameters:
+
+  .. code-block:: ruby
+
+    docker::run { 'helloworld':
+      image        => 'base',
+      command      => '/bin/sh -c "while true; do echo hello world; sleep 1; done"',
+      ports        => ['4444', '4555'],
+      volumes      => ['/var/lib/couchdb', '/var/log'],
+      volumes_from => '6446ea52fbc9',
+      memory_limit => 10485760, # bytes
+      username     => 'example',
+      hostname     => 'example.com',
+      env          => ['FOO=BAR', 'FOO2=BAR2'],
+      dns          => ['8.8.8.8', '8.8.4.4'],
+    }
+
+Note that ports, env, dns and volumes can be set with either a single string
+or as above with an array of values.
diff --git a/components/engine/docs/sources/use/workingwithrepository.rst b/components/engine/docs/sources/use/workingwithrepository.rst
new file mode 100644
index 0000000000..c1ce7f455e
--- /dev/null
+++ b/components/engine/docs/sources/use/workingwithrepository.rst
@@ -0,0 +1,75 @@
+.. _working_with_the_repository:
+
+Working with the repository
+============================
+
+
+Top-level repositories and user repositories
+--------------------------------------------
+
+Generally, there are two types of repositories: Top-level repositories which are controlled by the people behind
+Docker, and user repositories.
+
+* Top-level repositories can easily be recognized by not having a / (slash) in their name. These repositories can
+  generally be trusted.
+* User repositories always come in the form of /. This is what your published images will look like.
+* User images are not checked, it is therefore up to you whether or not you trust the creator of this image.
+
+
+Find public images available on the index
+-----------------------------------------
+
+Search by name, namespace or description
+
+.. code-block:: bash
+
+    docker search
+
+
+Download them simply by their name
+
+.. code-block:: bash
+
+    docker pull
+
+
+Very similarly you can search for and browse the index online at https://index.docker.io
+
+
+Connecting to the repository
+----------------------------
+
+You can create a user on the central docker repository online, or by running
+
+.. code-block:: bash
+
+    docker login
+
+
+If your username does not exist it will prompt you to also enter a password and your e-mail address. It will then
+automatically log you in.
+
+
+Committing a container to a named image
+---------------------------------------
+
+In order to commit to the repository it is required to have committed your container to an image with your namespace.
+
+.. code-block:: bash
+
+    # for example docker commit $CONTAINER_ID dhrp/kickassapp
+    docker commit /
+
+
+Pushing a container to the repository
+-----------------------------------------
+
+In order to push an image to the repository you need to have committed your container to a named image (see above).
+
+Now you can push this image to the repository
+
+..
code-block:: bash + + # for example docker push dhrp/kickassapp + docker push + diff --git a/components/engine/docs/theme/docker/layout.html b/components/engine/docs/theme/docker/layout.html index 32955159b0..aa5a24d496 100755 --- a/components/engine/docs/theme/docker/layout.html +++ b/components/engine/docs/theme/docker/layout.html @@ -66,7 +66,7 @@