Tag Archives: python

Customizing LLMs: Full Fine-Tuning

Full fine-tuning is the traditional method for adapting large language models (LLMs): it updates all of the model’s parameters. While more resource-intensive than parameter-efficient fine-tuning (PEFT) and other methods, it allows for deeper and more comprehensive customization, especially when adapting to a significantly different task or domain.
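The defining property (every parameter receives a gradient update) can be illustrated with a toy model. This is only a minimal sketch using a hypothetical two-parameter linear "model", not a real LLM or any specific training library:

```python
import numpy as np

# Toy illustration: under full fine-tuning, ALL parameters (here W and b)
# receive gradient updates on every step. Names and sizes are hypothetical.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 2))   # stand-in for "pre-trained" weights
b = np.zeros(2)               # stand-in for a bias term

x = np.array([1.0, -1.0])     # one training input
target = np.array([0.5, 0.5]) # desired output

for _ in range(100):
    y = W @ x + b
    err = y - target                  # gradient of 0.5 * MSE w.r.t. y
    W -= 0.1 * np.outer(err, x)       # every weight is updated...
    b -= 0.1 * err                    # ...and so is the bias

print(np.allclose(W @ x + b, target, atol=1e-3))  # → True
```

With a real LLM the loop is the same in spirit, but the optimizer state and gradients for billions of parameters are what make this approach expensive.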

Read More

Customizing LLMs: Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) is a more efficient approach to adapting large language models (LLMs) than traditional full fine-tuning. Instead of modifying the entire model, PEFT fine-tunes only a small subset of the model’s parameters, making it far less resource-intensive. This allows for faster adaptation to specific tasks while preserving most of the model’s pre-trained knowledge, offering a cost-effective way to improve performance on specialized tasks.
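One popular PEFT flavor is a LoRA-style low-rank adapter: the pre-trained weight stays frozen and only two small factor matrices train. The sketch below is a hypothetical toy, not a real LLM layer or a specific library API:

```python
import numpy as np

# Toy LoRA-style sketch: the "pre-trained" weight W is frozen; only the
# small rank-1 factors A and B are trained. All names are hypothetical.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))          # frozen pre-trained weight
A = np.zeros((1, 4))                 # trainable low-rank factor (init 0)
B = rng.normal(size=(4, 1)) * 0.1    # trainable low-rank factor

x = rng.normal(size=4)
target = rng.normal(size=4)
W_before = W.copy()

for _ in range(200):
    y = (W + B @ A) @ x              # adapted forward pass
    err = y - target
    # gradients flow only into A and B; W receives no update at all
    A -= 0.01 * (B.T @ np.outer(err, x))
    B -= 0.01 * (np.outer(err, x) @ A.T)

print(np.array_equal(W, W_before))   # frozen weights are untouched
```

Here only 8 adapter values train instead of the 16 frozen weights; in a real model the ratio is far more dramatic, which is where the resource savings come from.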

Read More

Training YOLO to Detect License Plates

The nice thing about ChatGPT and similar systems is that the complexity of the AI/ML functionality is hidden behind a friendly natural language interface, making it accessible to the masses. Behind this easy-to-use facade, however, lies a lot of advanced functionality organized as a sequence of data processing steps called a pipeline. An AI-powered business card reader, for example, would first detect text regions and then recognize the individual letters within the context of the words they belong to. A license plate reader works similarly. Detection is a step you will often need in your AI/ML projects, and that’s why we will be looking at YOLO.
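The detect-then-recognize flow described above can be sketched as two chained stages. Both stage functions below are hypothetical stubs (a real system would plug in a YOLO detector and an OCR model here); the point is only the shape of the pipeline:

```python
# Hypothetical two-stage license plate pipeline: stage 1 detects plate
# regions, stage 2 recognizes the characters in each detected region.

def detect_plates(image):
    # Stub for a detector (e.g. YOLO): pretend we found one
    # bounding box as (x, y, width, height).
    return [(40, 80, 120, 30)]

def recognize_text(image, box):
    # Stub for an OCR stage: pretend we read the characters
    # inside the cropped box.
    return "ABC-1234"

def pipeline(image):
    # Chain the stages: each detection is fed to the recognizer.
    return [recognize_text(image, box) for box in detect_plates(image)]

print(pipeline(image=None))  # → ['ABC-1234']
```

Swapping either stub for a trained model changes the implementation of a stage, not the structure of the pipeline.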

Read More