TY - COMP
TI - amzn/milan
AU - Borchert, Tom
AB - Milan is a Scala API and runtime infrastructure for building data-oriented systems, built on top of Apache Flink.
DA - 2019/11/25/
PY - 2019
DP - GitHub
LA - Scala
PB - Amazon
UR - https://github.com/amzn/milan
Y2 - 2019/11/27/19:46:21
KW - Implementation
KW - Machine learning
KW - Probabilistic programming
ER -
TY - JOUR
TI - Generative Adversarial Networks
AU - Goodfellow, Ian J.
AU - Pouget-Abadie, Jean
AU - Mirza, Mehdi
AU - Xu, Bing
AU - Warde-Farley, David
AU - Ozair, Sherjil
AU - Courville, Aaron
AU - Bengio, Yoshua
T2 - arXiv:1406.2661 [cs, stat]
AB - We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to 1/2 everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.
DA - 2014/06/10/
PY - 2014
DP - arXiv.org
UR - http://arxiv.org/abs/1406.2661
Y2 - 2019/11/28/11:44:28
KW - Adversarial attacks
KW - Classical ML
KW - Implementation
KW - Machine learning
ER -
TY - COMP
TI - dmurfet/2simplicialtransformer
AU - Murfet, Daniel
AB - Code for the 2-simplicial Transformer paper.
DA - 2019/10/14/
PY - 2019
DP - GitHub
LA - Python
UR - https://github.com/dmurfet/2simplicialtransformer
Y2 - 2019/11/22/16:50:05
KW - Abstract machines
KW - Algebra
KW - Implementation
KW - Machine learning
KW - Semantics
ER -
TY - COMP
TI - dmurfet/deeplinearlogic
AU - Murfet, Daniel
AB - Deep learning and linear logic.
DA - 2018/07/14/
PY - 2018
DP - GitHub
LA - Jupyter Notebook
UR - https://github.com/dmurfet/deeplinearlogic
Y2 - 2019/11/22/16:44:43
KW - Categorical ML
KW - Implementation
KW - Linear logic
KW - Machine learning
KW - Semantics
ER -
TY - COMP
TI - dmurfet/polysemantics
AU - Murfet, Daniel
AB - Polynomial semantics of linear logic.
DA - 2018/04/29/
PY - 2018
DP - GitHub
LA - Python
UR - https://github.com/dmurfet/polysemantics
Y2 - 2019/11/22/16:45:35
KW - Categorical ML
KW - Implementation
KW - Linear logic
KW - Machine learning
KW - Semantics
ER -
TY - JOUR
TI - Deep Probabilistic Programming
AU - Tran, Dustin
AU - Hoffman, Matthew D.
AU - Saurous, Rif A.
AU - Brevdo, Eugene
AU - Murphy, Kevin
AU - Blei, David M.
T2 - arXiv:1701.03757 [cs, stat]
AB - We propose Edward, a Turing-complete probabilistic programming language. Edward defines two compositional representations---random variables and inference. By treating inference as a first class citizen, on a par with modeling, we show that probabilistic programming can be as flexible and computationally efficient as traditional deep learning. For flexibility, Edward makes it easy to fit the same model using a variety of composable inference methods, ranging from point estimation to variational inference to MCMC. In addition, Edward can reuse the modeling representation as part of inference, facilitating the design of rich variational models and generative adversarial networks. For efficiency, Edward is integrated into TensorFlow, providing significant speedups over existing probabilistic systems. For example, we show on a benchmark logistic regression task that Edward is at least 35x faster than Stan and 6x faster than PyMC3. Further, Edward incurs no runtime overhead: it is as fast as handwritten TensorFlow.
DA - 2017/03/07/
PY - 2017
DP - arXiv.org
UR - http://arxiv.org/abs/1701.03757
Y2 - 2019/11/27/23:15:14
KW - Bayesian inference
KW - Implementation
KW - Machine learning
KW - Probabilistic programming
ER -
TY - BOOK
TI - Model-Based Machine Learning
AU - Winn, John Michael
AB - This book is unusual for a machine learning textbook in that the authors do not review dozens of different algorithms. Instead they introduce all of the key ideas through a series of case studies involving real-world applications. Case studies play a central role because it is only in the context of applications that it makes sense to discuss modelling assumptions. Each chapter therefore introduces one case study which is drawn from a real-world application that has been solved using a model-based approach.
DA - 2019/06//
PY - 2019
DP - Google Books
SP - 400
LA - en
PB - Taylor & Francis Incorporated
SN - 978-1-4987-5681-5
KW - Bayesian inference
KW - Classical ML
KW - Implementation
ER -