We propose a novel approach for incorporating prior knowledge into the perceptron. The goal is to update the hypothesis using both label feedback and prior knowledge, given in the form of soft polyhedral advice, so as to make increasingly accurate predictions in subsequent rounds. Advice helps speed up and bias learning, so that good generalization can be obtained with less data. Updates to the hypothesis use a hybrid loss that takes into account the margins of both the hypothesis and the advice on the current point. Mistake-bound analysis and experimental results demonstrate that advice can speed up learning considerably.
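To illustrate the flavor of a margin-based update that mixes label feedback with advice, consider the following minimal sketch. It is not the paper's algorithm: the single advice vector `a`, the fixed mixing weight `mu`, and the simple additive update rule are all illustrative assumptions standing in for the soft polyhedral advice and hybrid loss described above.

```python
import numpy as np

def advice_perceptron_step(w, x, y, a, eta=1.0, mu=0.5):
    """One perceptron-style update biased by a linear advice vector.

    w   : current weight vector (hypothesis)
    x   : current example
    y   : label in {-1, +1}
    a   : advice vector (simplified stand-in for polyhedral advice)
    eta : learning rate; mu : advice mixing weight (both assumed)
    """
    hyp_margin = y * np.dot(w, x)  # margin of the hypothesis on x
    if hyp_margin <= 0.0:          # mistake (or zero margin): update
        # The correction combines the standard perceptron step y*x
        # with a pull toward the advice direction, weighted by mu.
        w = w + eta * (y * x + mu * a)
    return w
```

With informative advice (an `a` that points toward the true separator), the added term moves `w` toward a good hypothesis faster than label feedback alone, which is the intuition behind the speed-up claimed above.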