Theano is a Python library well suited to the numerical tasks commonly encountered in deep learning.
What makes it well suited to those tasks is that it combines several paradigms for numerical computation, namely:
matrix operations
symbolic variable and function definitions
just-in-time compilation to CPU or GPU machine code
Since all variables are actually symbolic, you need to define a function and supply concrete values in order to get a result. For example:
import theano
import theano.tensor as T

# X is a matrix; w and y are vectors (symbolic variables)
X, w, y = T.matrix('X'), T.vector('w'), T.vector('y')
# E is a scalar that depends on the above, e.g. a squared error
E = T.sum((T.dot(X, w) - y) ** 2)
# to get the value of E we must define:
Efun = theano.function([X, w, y], E, allow_input_downcast=True)
While this seems like an unnecessary step, it's not. Because Theano now has a representation of the whole expression graph behind Efun, it can compile and optimize the code so that it runs on either the CPU or the GPU.
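To make the define-then-compile workflow concrete without requiring Theano itself, here is a minimal pure-Python sketch of the same idea: building an expression records a graph and computes nothing, and evaluation happens only when the "compiled" function is called. All names here are illustrative, not Theano's API.

```python
# Illustrative sketch (not Theano): a tiny symbolic-expression graph
# that mirrors the define-then-evaluate workflow described above.
class Var:
    def __init__(self, name=None, op=None, args=()):
        self.name, self.op, self.args = name, op, args

    def __add__(self, other): return Var(op='add', args=(self, other))
    def __mul__(self, other): return Var(op='mul', args=(self, other))

def function(inputs, output):
    """'Compile' the graph into a callable; a real library would
    optimize the whole expression graph at this point."""
    def run(*values):
        env = dict(zip(inputs, values))
        def ev(node):
            if node.op is None:          # leaf: look up supplied value
                return env[node]
            a, b = (ev(arg) for arg in node.args)
            return a + b if node.op == 'add' else a * b
        return ev(output)
    return run

x, y = Var('x'), Var('y')
E = x * y + x            # builds a graph, computes nothing yet
Efun = function([x, y], E)
print(Efun(3, 4))        # evaluated only now
```

Because the whole graph is known before any value flows through it, the "compile" step is where a library like Theano can rewrite, fuse, and retarget the computation for CPU or GPU.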
Theano is one of the oldest deep learning libraries out there and a lot of other widely used libraries have been built on top of it.
Theano, however, operates mainly at the mathematical level of deep learning and data exploration, offering features similar to NumPy or MATLAB. This is why it's usually paired with other libraries to achieve a higher level of abstraction.
TensorFlow is developed and maintained by Google. It's the engine behind a lot of features found in Google applications, such as:
recognizing spoken words
translating from one language to another
improving Internet search results
This makes TensorFlow a crucial component of many Google applications, so continued support and development are ensured in the long term, given how important it is to its current maintainers.
TensorFlow can run on multiple GPUs, which makes it easy to spin up sessions and run the code on different machines without having to stop or restart the program.
Besides its easy syntax, using Python also gives developers access to some of the most powerful libraries for scientific computing (NumPy, SciPy, pandas) without having to switch languages.
You can feed in and fetch the results of arbitrary data on any edge of the graph. Combined with TensorBoard (TensorFlow's suite of visualization tools), this produces clear, easy-to-understand graph visualizations, making debugging even simpler.
TensorFlow is written in Python, with the performance-critical parts implemented in C++; all of the high-level abstraction and development happens in Python.
For now, Google has only open sourced parts of the AI engine, namely some algorithms that run atop it. The advanced hardware infrastructure that drives this engine is not "open source".
You can choose the backend for Keras: simply change the backend field in the keras.json configuration file to "theano", "tensorflow", or "cntk".
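For reference, the backend is selected in the keras.json configuration file (by default at ~/.keras/keras.json); a typical file looks like the following, with the other fields shown being Keras's standard defaults:

```json
{
    "image_data_format": "channels_last",
    "epsilon": 1e-07,
    "floatx": "float32",
    "backend": "tensorflow"
}
```

Keras reads this file on import, so the change takes effect the next time you import the library.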
Theano's development was discontinued in 2017, so TensorFlow or CNTK is the better choice.
Keras is a high-level API, so it's difficult to customize your model past a certain point. If you want to build something beyond the application level, use Theano or TensorFlow directly (Keras runs on top of either of these anyway).
When using Keras you don't have to pull every part of the framework into your project. For example, you can use only the training algorithms and not the layer implementations, so it works more like a collection of libraries.
CNTK easily outperforms Theano, TensorFlow, Torch 7, and Caffe with its support of "multi-machine-multi-GPU" backends. Such a setup can be built using Azure's GPU Lab.
Infer.NET is free for academic use. However, at this time, commercial use of Infer.NET is limited to Microsoft. No other commercial licenses are available.
Infer.NET supports expectation propagation (including belief propagation as a special case), variational message passing (also known as variational Bayes), max product (for discrete models), and block Gibbs sampling.
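Infer.NET's own API is in C#; as a language-neutral illustration of what Gibbs sampling (the last algorithm listed) does, here is a minimal Python sketch for a standard bivariate normal with correlation rho, where each variable is repeatedly resampled from its conditional distribution given the other. This is purely illustrative and not Infer.NET code.

```python
import math
import random

def gibbs_bivariate_normal(rho, n_samples=5000, seed=0):
    """Gibbs sampling for a standard bivariate normal with correlation
    rho: alternately draw each coordinate from its conditional given
    the other, x | y ~ N(rho * y, 1 - rho**2)."""
    rng = random.Random(seed)
    sd = math.sqrt(1 - rho ** 2)   # conditional standard deviation
    x = y = 0.0
    samples = []
    for _ in range(n_samples):
        x = rng.gauss(rho * y, sd)
        y = rng.gauss(rho * x, sd)
        samples.append((x, y))
    return samples

samples = gibbs_bivariate_normal(rho=0.8)
mean_x = sum(s[0] for s in samples) / len(samples)
```

The sample means converge to the true means (zero here); the same alternate-and-resample idea, applied to groups of variables at once, is what "block" Gibbs sampling refers to.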
You can use Infer.NET to solve many different kinds of machine learning problems, from standard problems like classification, recommendation or clustering through to customised solutions to domain-specific problems.
Torch is written in Lua instead of the more widely used Python, so it's not as accessible to academics as Python-based solutions, Python being one of the most widely used languages in scientific computing.