Cloud computing offers unique opportunities for automation and reduced administrative costs. However, the special circumstances of such environments, such as live virtual machine migrations, can make network anomaly detection more challenging. This thesis examines several anomaly detection algorithms and evaluates the benefits of estimating the anomaly rate. The theoretical part describes the methods from a mathematical standpoint. The practical part then explores validation and training methods for time-series learning and selects appropriate model parameters. The trained models are evaluated on multiple data sets and compared with regard to their ability to detect anomalies, their false-alarm rate, and their speed.