A very prominent application of AI is predicting new facts from patterns found in datasets structured as Knowledge Graphs (KGs). These are graph structures representing knowledge of a domain of interest and/or of a particular organization. The flexibility of KGs in representing incomplete data and their versatility in expressing data from different domains have favored their adoption in industry and academia. Google, for instance, has been using KGs for at least a decade to enrich search results. There are also large open KGs such as Wikidata, which centralizes information across Wikimedia projects, including Wikipedia.

Many KGs are highly incomplete: they do not contain all the links that exist between the entities they describe. Predicting missing links among the entities of a KG is known in the literature as link prediction. The classical approach is to map entities (and/or relations) into a vector space and predict new links based on patterns in the KG's data using neural network models. Such mappings are called Knowledge Graph Embeddings (KGEs), and the interest in building the best-performing predictive models has attracted numerous researchers.

Most research on KGEs has focused on improving the accuracy of the underlying machine learning models. However, such accuracy is often measured on biased, erroneous, or incomplete data, which can result in corrupted models that make wrong predictions in uncontrollable ways, as well as replicate and reinforce biases. A recent line of research aims at improving KGEs by taking advantage of ontologies: logical theories that formalize knowledge and often accompany KGs. Ontologies can be expressed in various well-known formalisms for knowledge representation and automated reasoning. In our project, we want to ensure quality guarantees for KGEs, in particular, consistency with respect to logical constraints expressed as an ontology, and the injection of domain knowledge into KGEs.
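To make the embedding-based approach to link prediction concrete, the following is a minimal sketch using a translational scoring model in the style of TransE, one common KGE family. All entity names, the embedding dimension, and the random vectors are illustrative assumptions; in practice the embeddings are learned from the KG's known triples by minimizing a ranking loss rather than sampled at random:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 50  # embedding dimension (illustrative choice)

# Toy vocabulary: entities and relations from a hypothetical KG.
entities = ["Paris", "France", "Berlin", "Germany"]
relations = ["capital_of"]

# One vector per entity/relation. In a real system these are trained
# parameters; here they are random placeholders for the sketch.
ent_emb = {e: rng.normal(size=dim) for e in entities}
rel_emb = {r: rng.normal(size=dim) for r in relations}

def transe_score(head: str, relation: str, tail: str) -> float:
    """TransE-style plausibility of the triple (head, relation, tail).

    The model treats a relation as a translation in the vector space,
    i.e. it assumes h + r ≈ t for true triples, so a higher (less
    negative) score means a more plausible triple.
    """
    h, r, t = ent_emb[head], rel_emb[relation], ent_emb[tail]
    return -float(np.linalg.norm(h + r - t))

# Link prediction: rank all candidate tails for (Paris, capital_of, ?)
# by score, from most to least plausible.
ranked_tails = sorted(
    entities,
    key=lambda t: transe_score("Paris", "capital_of", t),
    reverse=True,
)
```

With trained embeddings, the top-ranked tail entity would be the model's predicted missing link; with the random vectors above, the ranking is of course arbitrary and only illustrates the mechanics of scoring and ranking.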