Vector search engines use Machine Learning models to offer powerful functionality for operating on your data. We are looking at anything from summarizers (that can condense any text into a short sentence), through auto-labelers (that can classify your data), to transformers and vectorizers (that can convert any data – text, image, audio, etc. – into vectors and use them for context-based queries), and many more use cases.
All of these use cases require Machine Learning model inference – the process of running data through an ML model and computing an output (e.g. taking a paragraph and summarizing it into a short sentence) – which is compute-heavy.
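To make the vectorizer use case concrete, here is a minimal sketch of the idea: an "embed" step that turns text into a fixed-size vector (a toy hashing-based stand-in for a real ML model's inference pass, which would be far more expensive), and a context-based query that finds the nearest document by cosine similarity. The function names and the tiny corpus are illustrative, not from any particular engine.

```python
import hashlib
import math

def embed(text, dims=64):
    """Toy vectorizer: hash each token into a fixed-size, L2-normalized
    vector. A cheap stand-in for real ML model inference."""
    vec = [0.0] * dims
    for token in text.lower().split():
        h = int(hashlib.md5(token.encode()).hexdigest(), 16)
        vec[h % dims] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a, b):
    # Vectors are already normalized, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

# Index a few documents by pre-computing their vectors...
docs = [
    "the cat sat on the mat",
    "stock prices fell sharply",
    "quantum computers use qubits",
]
index = [(doc, embed(doc)) for doc in docs]

# ...then answer a query by ranking documents by vector similarity.
query_vec = embed("where did the cat sit")
best_doc, _ = max(index, key=lambda item: cosine(query_vec, item[1]))
print(best_doc)
```

A production engine replaces the hashing trick with a trained model (the compute-heavy inference step) and the linear scan with an approximate nearest-neighbor index, but the query flow is the same.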