Machine learning theory and practice are growing in complexity as methods are applied to increasingly challenging problems. The principal focus has been on maximizing decision performance, but problems involving the control and allocation of resources require systems that are robust against the uncertainty associated with decisions. Robust probabilistic inference focuses on accurately estimating uncertainty and measuring the influence of risk on decision-making. Risk can arise from unmodeled uncertainties, incomplete measurements, costs associated with different outcomes, and other sources. To achieve robustness, a system must constrain the uncertainty and losses associated with risk. Approaches are needed that provide a framework for defining and measuring uncertainty and loss in complex systems. One example is the use of generalized entropy functions, such as the Rényi and Tsallis entropies, which reshape risk-biased cost functions associated with uncertainty. Papers are sought that explore the influence of risk on uncertainty measures and demonstrate novel machine learning methods that enable systems to be robust against risk.
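As a minimal illustration of the generalized entropies mentioned above, the sketch below implements the standard Rényi entropy, H_α(p) = log(Σᵢ pᵢ^α) / (1 − α), and Tsallis entropy, S_q(p) = (1 − Σᵢ pᵢ^q) / (q − 1), for a discrete distribution. Both recover the Shannon entropy in the limit α, q → 1; varying the order parameter re-weights high- versus low-probability outcomes, which is one way a cost function can be biased toward or away from risk. The function names and the example distribution are illustrative choices, not part of any specific submission.

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Rényi entropy of order alpha for a discrete distribution p."""
    p = np.asarray(p, dtype=float)
    if np.isclose(alpha, 1.0):
        # Shannon limit as alpha -> 1
        return float(-np.sum(p * np.log(p)))
    return float(np.log(np.sum(p ** alpha)) / (1.0 - alpha))

def tsallis_entropy(p, q):
    """Tsallis entropy of order q for a discrete distribution p."""
    p = np.asarray(p, dtype=float)
    if np.isclose(q, 1.0):
        # Shannon limit as q -> 1
        return float(-np.sum(p * np.log(p)))
    return float((1.0 - np.sum(p ** q)) / (q - 1.0))

# Example: a skewed distribution; small orders emphasize rare outcomes,
# large orders emphasize the dominant outcome.
p = [0.7, 0.2, 0.1]
for order in (0.5, 1.0, 2.0):
    print(order, renyi_entropy(p, order), tsallis_entropy(p, order))
```

Orders below 1 weight the tail (rare, potentially high-loss events) more heavily, while orders above 1 concentrate on the mode, which is the sense in which these families can reshape an uncertainty-based cost toward a desired risk posture.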