LLM Zoomcamp 2025: Module 2 Homework

Author

Tony Wu

1 Problem 1

from fastembed import TextEmbedding
model_handle = "jinaai/jina-embeddings-v2-small-en"
embedder = TextEmbedding(model_name=model_handle)
text = "I just discovered the course. Can I join now?"
vector_q = next(embedder.embed(text))
print(f"Minimal value: {min(vector_q)}")
Minimal value: -0.11726373551188797
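
As a quick sanity check (not required by the homework), the embedding's dimensionality and norm can be inspected; jina-embeddings-v2-small-en should produce 512-dimensional, roughly unit-length vectors, which is why plain dot products can serve as cosine similarities below.

import numpy as np

print(f"Dimensionality: {len(vector_q)}")    # expected: 512 for this model
print(f"Norm: {np.linalg.norm(vector_q)}")   # expected: ~1.0 (normalized embedding)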

2 Problem 2

doc = "Can I still join the course after the start date?"   
vector_d = next(embedder.embed(doc))
print(f"Cosine similarity: {vector_q.dot(vector_d)}")
Cosine similarity: 0.9008528856818037
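
Because the vectors are (approximately) unit-length, the dot product above equals the cosine similarity; a small sketch of the explicit formula for comparison:

import numpy as np

cosine = vector_q.dot(vector_d) / (np.linalg.norm(vector_q) * np.linalg.norm(vector_d))
print(f"Explicit cosine similarity: {cosine}")   # should match the dot product above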

3 Problem 3

import numpy as np
documents = [{'text': "Yes, even if you don't register, you're still eligible to submit the homeworks.\nBe aware, however, that there will be deadlines for turning in the final projects. So don't leave everything for the last minute.",
  'section': 'General course-related questions',
  'question': 'Course - Can I still join the course after the start date?',
  'course': 'data-engineering-zoomcamp'},
 {'text': 'Yes, we will keep all the materials after the course finishes, so you can follow the course at your own pace after it finishes.\nYou can also continue looking at the homeworks and continue preparing for the next cohort. I guess you can also start working on your final capstone project.',
  'section': 'General course-related questions',
  'question': 'Course - Can I follow the course after it finishes?',
  'course': 'data-engineering-zoomcamp'},
 {'text': "The purpose of this document is to capture frequently asked technical questions\nThe exact day and hour of the course will be 15th Jan 2024 at 17h00. The course will start with the first  “Office Hours'' live.1\nSubscribe to course public Google Calendar (it works from Desktop only).\nRegister before the course starts using this link.\nJoin the course Telegram channel with announcements.\nDon’t forget to register in DataTalks.Club's Slack and join the channel.",
  'section': 'General course-related questions',
  'question': 'Course - When will the course start?',
  'course': 'data-engineering-zoomcamp'},
 {'text': 'You can start by installing and setting up all the dependencies and requirements:\nGoogle cloud account\nGoogle Cloud SDK\nPython 3 (installed with Anaconda)\nTerraform\nGit\nLook over the prerequisites and syllabus to see if you are comfortable with these subjects.',
  'section': 'General course-related questions',
  'question': 'Course - What can I do before the course starts?',
  'course': 'data-engineering-zoomcamp'},
 {'text': 'Star the repo! Share it with friends if you find it useful ❣️\nCreate a PR if you see you can improve the text or the structure of the repository.',
  'section': 'General course-related questions',
  'question': 'How can we contribute to the course?',
  'course': 'data-engineering-zoomcamp'}]

doc_texts = [doc["text"] for doc in documents]
doc_vectors = np.array(list(embedder.embed(doc_texts)))
cosine_similarities = doc_vectors.dot(vector_q)
most_similar_index = np.argmax(cosine_similarities)
print(f"Document index with highest similarity: {most_similar_index}")
Document index with highest similarity: 1
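
For reference, a small sketch that prints each document's score next to its question makes the ranking easy to verify:

for i, (score, doc) in enumerate(zip(cosine_similarities, documents)):
    print(f"{i}: {score:.4f}  {doc['question']}")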

4 Problem 4

mod_texts = [doc['question'] + ' ' + doc['text'] for doc in documents]
mod_vectors = np.array(list(embedder.embed(mod_texts)))
cosine_similarities = mod_vectors.dot(vector_q)
most_similar_index = np.argmax(cosine_similarities)
print(f"Document index with highest similarity: {most_similar_index}")
print("Including the question adds additional semantic meaning to the embedding, which\n could change the cosine similarity with the query")
Document index with highest similarity: 0
Including the question adds additional semantic meaning to the embedding, which
 could change the cosine similarity with the query
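
A minimal sketch comparing the two sets of scores (reusing doc_vectors from Problem 3) shows how prepending the question shifts the ranking towards document 0, whose question is nearly identical to the query:

text_scores = doc_vectors.dot(vector_q)       # text-only embeddings (Problem 3)
combined_scores = mod_vectors.dot(vector_q)   # question + text embeddings (Problem 4)
for i, doc in enumerate(documents):
    print(f"{i}: {text_scores[i]:.4f} -> {combined_scores[i]:.4f}  {doc['question']}")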

5 Problem 5

models = TextEmbedding.list_supported_models()
lowest_dim_model = min(models, key=lambda x: x['dim'])
print(f"The smallest dimensionality: {lowest_dim_model['dim']}")
model_handle2 = "BAAI/bge-small-en"
The smallest dimensionality: 384
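
Several supported models share this 384-dimensional minimum; a sketch listing them (assuming each entry exposes 'model' and 'dim' keys) shows that BAAI/bge-small-en, used below, is one of them:

min_dim = min(m['dim'] for m in models)
print([m['model'] for m in models if m['dim'] == min_dim])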

6 Problem 6

import requests 
from qdrant_client import QdrantClient, models

docs_url = 'https://github.com/alexeygrigorev/llm-rag-workshop/raw/main/notebooks/documents.json'
docs_response = requests.get(docs_url)
documents_raw = docs_response.json()


documents = []

for course in documents_raw:
    course_name = course['course']
    if course_name != 'machine-learning-zoomcamp':
        continue

    for doc in course['documents']:
        doc['course'] = course_name
        documents.append(doc)

EMBEDDING_DIMENSIONALITY = 384
collection_name = "hw2"
qd_client = QdrantClient("http://localhost:6333")
# drop any existing collection with this name so the cell can be re-run from scratch
qd_client.delete_collection(collection_name=collection_name)
qd_client.create_collection(
    collection_name=collection_name,
    vectors_config=models.VectorParams(
        size=EMBEDDING_DIMENSIONALITY,
        distance=models.Distance.COSINE
    )
)
# index the "course" payload field so results can later be filtered by course
qd_client.create_payload_index(
    collection_name=collection_name,
    field_name="course",
    field_schema="keyword"
)
points = []

for i, doc in enumerate(documents):
    full_text = doc['question'] + ' ' + doc['text']  # embed question and answer together; keeps the Problem 1 query in `text` untouched
    vector = models.Document(text=full_text, model=model_handle2)
    point = models.PointStruct(
        id=i,
        vector=vector,
        payload=doc
    )
    points.append(point)

qd_client.upsert(
    collection_name=collection_name,
    points=points
)
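
An optional sanity check after the upsert (a small sketch): the number of points in the collection should equal the number of documents indexed.

print(qd_client.count(collection_name=collection_name).count)  # expected: len(documents)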

def search(query, limit=1):

    results = qd_client.query_points(
        collection_name=collection_name,
        query=models.Document(
            text=query,
            model=model_handle2
        ),
        limit=limit, # top closest matches
        with_payload=True #to get metadata in the results
    )

    return results

result = search(text)
print(f"Highest score: {result.points[0].score}")
Highest score: 0.99999994
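
The "course" payload index created above is not exercised by this search; a sketch of how a filtered query could look, restricting results to machine-learning-zoomcamp (the only course indexed here, so the filter is purely illustrative):

filtered = qd_client.query_points(
    collection_name=collection_name,
    query=models.Document(
        text="I just discovered the course. Can I join now?",
        model=model_handle2
    ),
    query_filter=models.Filter(
        must=[models.FieldCondition(key="course", match=models.MatchValue(value="machine-learning-zoomcamp"))]
    ),
    limit=1,
    with_payload=True
)
print(filtered.points[0].score, filtered.points[0].payload['question'])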