Kubernetes was built on several years of Google's experience running containers in production. It's a little opinionated about how containers should work and behave, but used correctly it can help you achieve fault-tolerant systems.
Kubernetes is currently supported by Google Compute Engine, Rackspace, Microsoft Azure, and vSphere. Work is being done to support Kubernetes on OpenShift and CloudFoundry.
Because of how opinionated Kubernetes is, you may need to rework parts of an existing application if you decide to adopt Kubernetes as its orchestration tool.
Kubernetes works very well with modern environments (such as CoreOS or Red Hat Atomic) which offer lightweight compute nodes that are managed for you rather than by you.
Kubernetes uses labels: key-value pairs attached to objects, usually pods, that describe characteristics of an object such as its version or tier. Labels let you identify objects or groups of objects by the characteristics they share; for example, you can select all the pods that belong to the backend tier. Labels also make grouping tasks easier, like moving pods between groups or assigning them to load-balanced groups.
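As a minimal sketch of how labels work, here is a hypothetical pod definition (all names and the image are illustrative) carrying `tier` and `version` labels:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: api-server
  labels:
    tier: backend     # which tier this pod belongs to
    version: "1.2"    # which release it runs
spec:
  containers:
  - name: api
    image: example/api:1.2
```

With labels in place, a selector picks out the group: `kubectl get pods -l tier=backend` lists every pod labeled as backend, regardless of version.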
Kubernetes is great for beginners who are just starting to work on clustering. It's probably the quickest and easiest way to start experimenting with and learning cluster-oriented development.
Kubernetes was not written for Docker clustering alone. It uses its own API, configuration format, and YAML definitions, so you can't use the Docker CLI or Docker Compose to define your containers. Everything has to be defined from scratch.
Since Docker Swarm is a native Docker tool, it exposes the Docker API, making it possible to integrate it and communicate with other Docker tools (CLI, Compose, Krane, etc.).
This also means that containers can be launched with a simple docker run command and Swarm will take care of the rest, such as selecting the appropriate host on which to run the container.
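As a sketch of what that looks like in practice (the manager hostname and port are assumptions), you point the standard Docker CLI at the Swarm manager and run containers as usual:

```shell
# Point the Docker CLI at the Swarm manager instead of a single engine
export DOCKER_HOST=tcp://swarm-manager.example.com:3376

# A plain docker run; Swarm selects an appropriate host for the container
docker run -d --name cache redis:3.0
```

No Swarm-specific syntax is involved; the scheduling decision happens behind the familiar Docker API.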
Docker Swarm is much more lightweight than alternatives such as Kubernetes and Mesosphere. Kubernetes, for instance, is very complex - it downloads and installs half of the web - whereas Docker Swarm has a much, much smaller footprint.
If the Docker API doesn't support something, then you are pretty much out of luck when it comes to Docker Swarm, because it won't be supported by Swarm either.
Nomad schedules many different types of tasks via drivers, including Docker containers, forked processes, Java JAR files, rkt containers, and VMs via QEMU. See the full list at https://www.nomadproject.io/docs/drivers/index.html
If a task that represents a service fails, it is retried; the retry behavior is configurable. This is expected for long-running services, but it's also especially helpful with batch jobs.
Nomad has the concept of both a long-running service and a batch job. When you submit a batch job that contains tasks and your cluster has no immediate capacity, the tasks queue up in Nomad; as capacity frees up, Nomad schedules the work. This is great if your workloads are heterogeneous in terms of CPU, memory, and how long they take to run. You don't have to scale up and down unless you want faster throughput; if throughput isn't as important, you can let things run as soon as a resource opens up.
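A minimal sketch of a Nomad batch job in HCL (the job name, image, and resource figures are illustrative) shows how the job type and per-task resources are declared - Nomad queues the task until a node has the requested capacity:

```hcl
job "nightly-report" {
  datacenters = ["dc1"]
  type        = "batch"          # queue and run to completion, vs. "service"

  group "report" {
    count = 1

    task "generate" {
      driver = "docker"

      config {
        image = "example/report-generator:latest"
      }

      resources {
        cpu    = 500   # MHz the scheduler reserves for this task
        memory = 256   # MB
      }
    }
  }
}
```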
Being focused on one thing only also has its advantages. For one, Nomad is architecturally very simple: there's a single binary for both clients and servers, and it needs no external services for coordination or storage.
You can control and monitor jobs via a JSON-based HTTP API. There is even the concept of a blocking query, whereby you can use long polling to wait for status updates about running jobs.
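A sketch of a blocking query against a hypothetical job (the address, job name, and index value are assumptions) looks like this: you take the modify index returned by a normal read and pass it back to block until something changes.

```shell
# A normal read returns the job's state plus an X-Nomad-Index header
curl http://localhost:4646/v1/job/nightly-report

# Passing that index back long-polls: the request blocks until the job
# changes, or until the wait timeout expires
curl "http://localhost:4646/v1/job/nightly-report?index=125&wait=60s"
```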
Nomad uses a high-level abstraction of jobs. Jobs are essentially task groups (sets of tasks). Because of this, Nomad allows users to develop and manage complex applications easily, without having to think about the individual containers that make up these applications.
Servers and clients both run from the same pre-compiled binary: you execute the binary and point it at a config file, and that's it for deployment. The GitHub repo for the project also includes sample service config files: https://github.com/hashicorp/nomad/tree/master/dist
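The steps above can be sketched with the Nomad CLI (the config path and job file name are assumptions):

```shell
# Start an agent - server or client, depending on the config file
nomad agent -config /etc/nomad.d/server.hcl

# Submit a job and check its status, all from the same binary
nomad run example.nomad
nomad status example
```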
While other orchestration tools provide much more than just cluster management and scheduling (they also provide things like secrets management, discovery, monitoring, etc.), Nomad follows the Unix philosophy of doing only one thing and doing it well, providing only cluster management and scheduling.