When AI/ML applications such as large language model apps work with objects like words, sentences, documents, images, video, and audio sequences, they describe each object with a set of numeric values that capture its properties: color, physical size, surface light characteristics, the audio spectrum at various frequency levels, and so on.
Machine Learning (ML) represents everything as vectors: documents, videos, user behaviors, and more. This vector representation makes it possible to search, retrieve, rank, and classify items by similarity and relevance, and it underpins applications such as product recommendations, semantic search, image search, anomaly detection, fraud detection, and face recognition.
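To make the idea concrete, here is a minimal sketch of measuring similarity between embedded items. The four-dimensional vectors and the cosine-similarity metric are illustrative assumptions, not the output of any particular model; real embeddings typically have hundreds or thousands of dimensions.

```python
import numpy as np

# Toy 4-dimensional "embeddings" standing in for the output of a real
# embedding model.
doc_a = np.array([0.12, 0.87, 0.33, 0.05])
doc_b = np.array([0.10, 0.80, 0.40, 0.02])
doc_c = np.array([0.95, 0.02, 0.11, 0.70])

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine of the angle between two vectors: close to 1.0 means very similar."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine_similarity(doc_a, doc_b))  # high score -> similar items
print(cosine_similarity(doc_a, doc_c))  # low score  -> dissimilar items
```

Items that are close in meaning end up close in vector space, which is what lets the same arithmetic power recommendations, semantic search, and the other applications listed above.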
These vectors are called vector embeddings. They are stored in a vector database and indexed so that similar objects can be found through an index search. A search is not run against direct user input such as keywords or metadata classifications of the stored objects. Instead, the search term is itself converted into a vector by the same AI/ML system that created the object embeddings, and the search then looks for identical and similar objects.
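The query path can be sketched in the same spirit. The `embed()` function, the toy vectors, and the brute-force scan below are illustrative stand-ins for a real embedding model and an indexed vector store, which would use an approximate nearest-neighbor index rather than comparing against every stored vector.

```python
import numpy as np

# A tiny in-memory "vector database": each row is the embedding of one
# stored object, produced ahead of time by the same (hypothetical)
# embed() model that also encodes incoming queries.
stored_vectors = np.array([
    [0.12, 0.87, 0.33, 0.05],   # object 0
    [0.95, 0.02, 0.11, 0.70],   # object 1
    [0.10, 0.80, 0.40, 0.02],   # object 2
])
object_ids = ["blue running shoe", "leather office chair", "navy trail shoe"]

def embed(text: str) -> np.ndarray:
    """Placeholder for a real embedding model; returns a fixed toy vector."""
    return np.array([0.11, 0.85, 0.35, 0.04])

def search(query: str, k: int = 2) -> list:
    """Embed the query and return the k most similar stored objects."""
    q = embed(query)
    # Cosine similarity against every stored vector (brute force; a real
    # vector database replaces this scan with an index lookup).
    sims = stored_vectors @ q / (
        np.linalg.norm(stored_vectors, axis=1) * np.linalg.norm(q)
    )
    top = np.argsort(-sims)[:k]
    return [object_ids[i] for i in top]

print(search("comfortable sneakers"))  # e.g. ['blue running shoe', 'navy trail shoe']
```

Because the query and the stored objects pass through the same embedding model, their vectors live in the same space, so nearest-neighbor distance is a meaningful measure of relevance.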