# Build an image search engine with llm-clip, chat with models with llm chat

LLM is my combination CLI tool and Python library for working with Large Language Models. I just released LLM 0.10 with two significant new features: embedding support for binary files and the llm chat command.

## Image search by embedding images with CLIP

I wrote about LLM's support for embeddings (including what those are and why they're interesting) when I released 0.9 last week. That initial release could only handle embeddings of text: great for things like building semantic search and finding related content, but not capable of handling other types of data.

It turns out there are some really interesting embedding models for working with binary data. Top of the list for me is CLIP, released by OpenAI in January 2021.

CLIP has a really impressive trick up its sleeve: it can embed both text and images into the same vector space. This means you can create an index for a collection of photos, each placed somewhere in 512-dimensional space. Then you can take a text string, like "happy dog", and embed that into the same space. The images that are closest to that location will be the ones that contain happy dogs!

My llm-clip plugin provides the CLIP model, loaded via SentenceTransformers.

## llm chat

Here's an example chat session with Llama 2:

```
Type 'exit' or 'quit' to exit
```

You can set an alias for the model to make that easier to remember.
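The image search described in the CLIP section boils down to a nearest-neighbor lookup: embed each photo once, embed the query text into the same space, then rank photos by similarity. Here is a minimal sketch of that lookup using NumPy; the `nearest` helper and the tiny hand-made 3-dimensional vectors (standing in for real 512-dimensional CLIP embeddings) are illustrative, not part of LLM or llm-clip:

```python
import numpy as np

def cosine_similarity(query, vectors):
    # normalize the query and each row, then take dot products
    q = query / np.linalg.norm(query)
    v = vectors / np.linalg.norm(vectors, axis=-1, keepdims=True)
    return v @ q

def nearest(index, query_vec, n=1):
    """Return the ids of the n index entries closest to query_vec."""
    ids = list(index)
    vectors = np.array([index[i] for i in ids])
    scores = cosine_similarity(query_vec, vectors)
    ranked = sorted(zip(ids, scores), key=lambda pair: pair[1], reverse=True)
    return [i for i, _ in ranked[:n]]

# toy stand-ins for CLIP image embeddings
index = {
    "dog.jpg": np.array([0.9, 0.1, 0.0]),
    "cat.jpg": np.array([0.1, 0.9, 0.0]),
}
query = np.array([0.8, 0.2, 0.0])   # pretend embedding of "happy dog"
print(nearest(index, query))        # ['dog.jpg']
```

With real embeddings you would replace the toy vectors with the 512-dimensional output of the CLIP model; the ranking logic stays the same.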
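The "exit"/"quit" behavior mentioned above can be sketched as a small read-eval-print loop. This is a hypothetical reconstruction, not LLM's actual implementation: `respond` stands in for whatever produces a model reply, and `read`/`write` default to the terminal:

```python
def is_exit(line: str) -> bool:
    # the session ends on 'exit' or 'quit' (case-insensitive)
    return line.strip().lower() in {"exit", "quit"}

def chat(respond, read=input, write=print):
    # minimal chat loop: read a prompt, print the model's reply
    write("Type 'exit' or 'quit' to exit")
    while True:
        try:
            line = read("> ")
        except EOFError:
            break
        if is_exit(line):
            break
        write(respond(line))
```

Passing test doubles for `read` and `write` makes the loop easy to exercise without a terminal or a real model.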