Andrejus Baranovski

Blog about Oracle, Full Stack, Machine Learning and Cloud

Sparrow Parse API for PDF Invoice Data Extraction

Sun, 2024-06-23 08:42
I explain how the Sparrow Parse API is integrated into Sparrow for data extraction from PDF documents such as invoices and receipts.
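
As a rough illustration of calling such an API over HTTP, the sketch below posts a PDF to a local service. The endpoint path and form fields are assumptions for illustration, not the actual Sparrow API.

```python
import requests

# Hypothetical endpoint and field names; the real Sparrow API may differ.
url = "http://localhost:8000/api/v1/sparrow-parse/inference"

with open("invoice.pdf", "rb") as pdf:
    response = requests.post(
        url,
        files={"file": ("invoice.pdf", pdf, "application/pdf")},
        data={"fields": "invoice_number,total"},
    )

response.raise_for_status()
print(response.json())
```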

Avoid LLM Hallucinations: Use Sparrow Parse for Tabular PDF Data, Instructor LLM for Forms

Mon, 2024-06-17 07:06
LLMs tend to hallucinate and produce incorrect results for table data extraction. For this reason, Sparrow uses a combined approach: Instructor structured output with an LLM to query form data, and Sparrow Parse to process tabular data within the same document.
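
A minimal sketch of that split, assuming Ollama's OpenAI-compatible endpoint and a llama3 model (both assumptions): form fields go through Instructor, while tables are left to a deterministic parser.

```python
import instructor
from openai import OpenAI
from pydantic import BaseModel

# Ollama exposes an OpenAI-compatible endpoint; Instructor wraps the client
# so the response is validated against a Pydantic class.
client = instructor.from_openai(
    OpenAI(base_url="http://localhost:11434/v1", api_key="ollama"),
    mode=instructor.Mode.JSON,
)

class FormData(BaseModel):
    invoice_number: str
    invoice_date: str
    total: float

def extract_form(document_text: str) -> FormData:
    # Form fields: the LLM answers, Instructor validates the JSON.
    return client.chat.completions.create(
        model="llama3",
        response_model=FormData,
        messages=[{"role": "user", "content": f"Extract the form fields:\n{document_text}"}],
    )

def extract_tables(pdf_path: str) -> list[str]:
    # Tabular data: handled deterministically (Sparrow Parse's role);
    # placeholder only in this sketch.
    raise NotImplementedError
```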

Effective Table Data Extraction from PDF without LLM

Mon, 2024-06-10 00:56
Sparrow Parse reads tabular data from PDFs using libraries such as Unstructured or PyMuPDF4LLM. This avoids the hallucination errors LLMs often produce when processing complex data structures.
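
For example, with PyMuPDF4LLM (one of the libraries mentioned) a whole PDF, tables included, can be converted to Markdown in one call. A minimal sketch:

```python
import pymupdf4llm

# Tables are rendered as Markdown tables, so rows and columns
# are preserved without any LLM involvement.
md_text = pymupdf4llm.to_markdown("invoice.pdf")
print(md_text)
```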

Instructor and Ollama for Invoice Data Extraction in Sparrow [LLM, JSON]

Mon, 2024-06-03 07:11
Structured output from an invoice document, using a local LLM. This works well with Instructor and Ollama.
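
A minimal sketch of the pattern, assuming a local Ollama server and a llama3 model (model name and field names are assumptions):

```python
import instructor
from openai import OpenAI
from pydantic import BaseModel

class Invoice(BaseModel):
    invoice_number: str
    total: float

client = instructor.from_openai(
    OpenAI(base_url="http://localhost:11434/v1", api_key="ollama"),
    mode=instructor.Mode.JSON,
)

invoice_text = "Invoice INV-123 ... Total: 100.00"  # extracted document text

# response_model forces the reply into the Invoice schema.
invoice = client.chat.completions.create(
    model="llama3",
    response_model=Invoice,
    messages=[{"role": "user", "content": f"Extract the invoice fields:\n{invoice_text}"}],
)
print(invoice.model_dump_json())
```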

Hybrid RAG with Sparrow Parse

Mon, 2024-05-27 06:02
To process documents with complex layouts and improve data retrieval from invoices or bank statements, we are implementing Sparrow Parse. It works in combination with an LLM for form data processing, while table data is converted into HTML or Markdown and extracted directly by Sparrow Parse. I explain the Hybrid RAG idea in this video.
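
As an illustration of the table-to-HTML step using the Unstructured library (the hi_res strategy and field names reflect Unstructured's API, not Sparrow's internals):

```python
from unstructured.partition.pdf import partition_pdf

# The hi_res strategy lets unstructured reconstruct table structure;
# each Table element then carries an HTML rendering in its metadata.
elements = partition_pdf(
    filename="bank_statement.pdf",
    strategy="hi_res",
    infer_table_structure=True,
)

tables_html = [el.metadata.text_as_html for el in elements if el.category == "Table"]
```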

Sparrow Parse - Data Processing for LLM

Mon, 2024-05-20 02:01
Data processing is very important in an LLM RAG pipeline: it improves data extraction results, especially for documents with complex layouts and large tables. This is why I built the open-source Sparrow Parse library; it helps balance between LLM and standard Python data extraction methods.

Invoice Data Preprocessing for LLM

Mon, 2024-05-13 06:52
Data preprocessing is an important step in an LLM pipeline. I show various approaches to preprocess invoice data before feeding it to an LLM. This step is quite challenging, especially for tables.
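
One illustrative preprocessing step (my own simplified example, not the exact code from the video): normalize whitespace so broken table rows don't confuse the model.

```python
import re

def clean_for_llm(raw_text: str) -> str:
    # Collapse runs of spaces and tabs that break table alignment,
    # and drop empty lines before prompting the LLM.
    lines = (re.sub(r"[ \t]+", " ", ln).strip() for ln in raw_text.splitlines())
    return "\n".join(ln for ln in lines if ln)
```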

You Don't Need RAG to Extract Invoice Data

Mon, 2024-05-06 02:49
Documents like invoices or receipts can be processed by an LLM directly, without RAG. I explain how you can do this locally with Ollama and Instructor. Thanks to Instructor, the LLM's structured output can be validated with your own Pydantic class.
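
A sketch of the validation side: a Pydantic class with a custom validator. With Instructor, a failed validation can trigger a retry that feeds the error back to the model (field names here are illustrative):

```python
from pydantic import BaseModel, field_validator

class Receipt(BaseModel):
    merchant: str
    total: float

    @field_validator("total")
    @classmethod
    def total_must_be_non_negative(cls, value: float) -> float:
        # A hallucinated or malformed total fails here instead of
        # silently flowing downstream.
        if value < 0:
            raise ValueError("total must be non-negative")
        return value
```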

LLM JSON Output with Instructor RAG and WizardLM-2

Mon, 2024-04-29 02:18
With the Instructor library you can implement simple RAG without a vector DB or dependencies on other LLM libraries. The key RAG components are good data pre-processing and cleaning, a powerful local LLM (such as WizardLM-2, Nous Hermes 2 PRO, or Llama3), and an Ollama or MLX backend.

Local RAG Explained with Unstructured and LangChain

Mon, 2024-04-22 03:01
In this tutorial, I do a code walkthrough and demonstrate how to implement a RAG pipeline using Unstructured, LangChain, and Pydantic for processing invoice data and extracting structured JSON data.
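
A condensed sketch of such a pipeline (model name and schema fields are assumptions):

```python
from unstructured.partition.pdf import partition_pdf
from langchain_community.chat_models import ChatOllama
from langchain_core.output_parsers import PydanticOutputParser
from langchain_core.prompts import PromptTemplate
from pydantic import BaseModel

class Invoice(BaseModel):
    invoice_number: str
    total: float

# 1. Pre-process the PDF into plain text with Unstructured.
text = "\n".join(el.text for el in partition_pdf(filename="invoice.pdf"))

# 2. Ask the local model for JSON matching the Pydantic schema.
parser = PydanticOutputParser(pydantic_object=Invoice)
prompt = PromptTemplate.from_template(
    "Extract the invoice data.\n{format_instructions}\n\n{document}"
).partial(format_instructions=parser.get_format_instructions())

chain = prompt | ChatOllama(model="llama3", format="json") | parser
invoice = chain.invoke({"document": text})
```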

Local LLM RAG with Unstructured and LangChain [Structured JSON]

Mon, 2024-04-15 07:22
I use the Unstructured library to pre-process PDF document content into a cleaner format, which helps the LLM produce a more accurate response. The JSON response is generated by the Nous Hermes 2 PRO LLM without any additional post-processing, and a dynamic Pydantic class validates the response to make sure it matches the request.
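
The dynamic class part can be done with Pydantic's create_model. A minimal sketch with assumed field names:

```python
from pydantic import create_model

# Build the validation class at runtime from the requested fields,
# so one pipeline can serve different extraction requests.
fields = {"invoice_number": (str, ...), "total": (float, ...)}
DynamicInvoice = create_model("DynamicInvoice", **fields)

DynamicInvoice.model_validate({"invoice_number": "INV-1", "total": 100.0})
```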

LlamaIndex Upgrade to 0.10.x Experience

Sun, 2024-03-31 09:11
I explain the key points you should keep in mind when upgrading to LlamaIndex 0.10.x.
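
The most visible change in 0.10.x is the package split: core imports moved under llama_index.core, and integrations became separate pip packages. For example:

```python
# Before 0.10.x (monolithic llama_index package):
# from llama_index import VectorStoreIndex, SimpleDirectoryReader

# From 0.10.x: core lives in llama_index.core, and integrations
# such as the Ollama LLM ship as separate pip packages.
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.llms.ollama import Ollama  # pip install llama-index-llms-ollama
```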

LLM Structured Output for Function Calling with Ollama

Mon, 2024-03-25 09:40
I explain how function calling works with an LLM. This concept is often misunderstood: the LLM doesn't call a function itself; it returns a JSON response with the values your environment should use to make the function call. In this example I'm using a Sparrow agent to call a function.
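
A minimal sketch of that flow: the model's reply only names the function and its arguments, and your own code dispatches the call (the function and JSON shape are illustrative):

```python
import json

def get_invoice_total(invoice_number: str) -> float:
    # The real function lives in your environment; the LLM never runs it.
    return 1234.56

# What the LLM returns: JSON describing the call, nothing more.
llm_response = '{"function": "get_invoice_total", "arguments": {"invoice_number": "INV-1"}}'

call = json.loads(llm_response)
registry = {"get_invoice_total": get_invoice_total}
result = registry[call["function"]](**call["arguments"])
print(result)
```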

FastAPI File Upload and Temporary Directory for Stateless API

Sun, 2024-03-17 09:32
I explain how to handle file upload with FastAPI and how to process the file using a Python temporary directory. Files placed into the temporary directory are automatically removed once the request completes, which is very convenient for a stateless API.
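
A minimal sketch of the pattern (process_document is a hypothetical placeholder for your own logic):

```python
import shutil
import tempfile
from pathlib import Path

from fastapi import FastAPI, File, UploadFile

app = FastAPI()

def process_document(path: Path) -> str:
    # Hypothetical placeholder for OCR/LLM processing.
    return f"processed {path.name}"

@app.post("/upload")
async def upload(file: UploadFile = File(...)):
    # The directory and the copied file are deleted when the block exits,
    # so the API keeps no state between requests.
    with tempfile.TemporaryDirectory() as tmp_dir:
        tmp_path = Path(tmp_dir) / file.filename
        with tmp_path.open("wb") as out:
            shutil.copyfileobj(file.file, out)
        result = process_document(tmp_path)
    return {"filename": file.filename, "result": result}
```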

Optimizing Receipt Processing with LlamaIndex and PaddleOCR

Sun, 2024-03-10 14:09
The LlamaIndex text completion function allows you to execute an LLM request combining custom data and the question, without using a vector DB. This is very useful when processing OCR output, as it simplifies the RAG pipeline. In this video I explain how OCR can be combined with an LLM to process image documents in Sparrow.
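
A condensed sketch of the OCR-plus-completion flow (model name assumed; PaddleOCR's classic result format is a per-page list of [bbox, (text, confidence)] lines):

```python
from llama_index.llms.ollama import Ollama
from paddleocr import PaddleOCR

# 1. OCR the receipt image.
ocr = PaddleOCR(lang="en")
pages = ocr.ocr("receipt.png")
ocr_text = "\n".join(line[1][0] for line in pages[0])

# 2. Feed the recognized text straight into a text completion call,
#    with no vector DB in between.
llm = Ollama(model="llama3")
answer = llm.complete(f"Extract the merchant and total from this receipt:\n{ocr_text}")
print(answer.text)
```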

LlamaIndex Multimodal with Ollama [Local LLM]

Sun, 2024-03-03 13:03
I describe how to run LlamaIndex Multimodal with a local LLaVA LLM through Ollama. The advantage of this approach is that you can process image documents with the LLM directly, without running them through OCR, which should lead to better results. This functionality is integrated into Sparrow as a separate LLM agent.
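
A minimal sketch using the llama-index-multi-modal-llms-ollama integration (the prompt and file name are illustrative):

```python
from llama_index.core import SimpleDirectoryReader
from llama_index.multi_modal_llms.ollama import OllamaMultiModal

# LLaVA reads the invoice image directly; no OCR step required.
mm_llm = OllamaMultiModal(model="llava")
image_documents = SimpleDirectoryReader(input_files=["invoice.png"]).load_data()

response = mm_llm.complete(
    prompt="Extract the invoice number and total from this image.",
    image_documents=image_documents,
)
print(response.text)
```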

LLM Agents with Sparrow

Mon, 2024-02-26 01:53
I explain new functionality in Sparrow: LLM agents support. This means you can implement independently running agents and invoke them from the CLI or API, which makes it easier to run various LLM-related processing within Sparrow.
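
To illustrate the idea (a generic sketch, not Sparrow's actual code): a registry of agents that both a CLI entry point and an API route can dispatch into.

```python
from typing import Callable

AGENTS: dict[str, Callable[[str], str]] = {}

def register(name: str):
    # Decorator that makes an agent discoverable by name.
    def wrap(fn: Callable[[str], str]) -> Callable[[str], str]:
        AGENTS[name] = fn
        return fn
    return wrap

@register("invoice")
def invoice_agent(payload: str) -> str:
    return f"processed invoice: {payload}"

# The same registry can back a CLI command and an API endpoint:
print(AGENTS["invoice"]("inv-001.pdf"))
```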

Extracting Invoice Structured Output with Haystack and Ollama Local LLM

Tue, 2024-02-20 02:49
I implemented a Sparrow agent with Haystack structured output functionality to extract invoice data. It runs locally through Ollama, using an LLM to retrieve key/value pair data.

Local LLM RAG Pipelines with Sparrow Plugins [Python Interface]

Sun, 2024-02-04 09:12
There are many tools and frameworks around LLMs, evolving and improving daily. I added plugin support in Sparrow to run different pipelines through the same Sparrow interface. Each pipeline can be implemented with different tech (LlamaIndex, Haystack, etc.) and run independently. The main advantage is that you can test various RAG functionalities from a single app with a unified API and choose the one that works best for the specific use case.
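
A generic sketch of the plugin idea (illustrative only, not Sparrow's actual plugin code): the caller picks a pipeline by name, and every implementation honors the same interface.

```python
class LlamaIndexPipeline:
    def run(self, query: str, file_path: str) -> dict:
        # Real LlamaIndex RAG logic would go here.
        return {"engine": "llamaindex", "query": query, "file": file_path}

class HaystackPipeline:
    def run(self, query: str, file_path: str) -> dict:
        # Real Haystack RAG logic would go here.
        return {"engine": "haystack", "query": query, "file": file_path}

PIPELINES = {"llamaindex": LlamaIndexPipeline, "haystack": HaystackPipeline}

def run_pipeline(name: str, query: str, file_path: str) -> dict:
    # Unified API: same call, interchangeable tech underneath.
    return PIPELINES[name]().run(query, file_path)
```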

LLM Structured Output with Local Haystack RAG and Ollama

Mon, 2024-01-29 13:27
Haystack 2.0 provides functionality to process LLM output and ensure a proper JSON structure based on a predefined Pydantic class. I show how you can run this on your local machine with Ollama, thanks to the OllamaGenerator class available from Haystack.
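
A minimal sketch with the ollama-haystack integration (model name assumed; in practice the reply is validated against the Pydantic class and the request retried if it doesn't parse):

```python
from haystack_integrations.components.generators.ollama import OllamaGenerator
from pydantic import BaseModel

class Invoice(BaseModel):
    invoice_number: str
    total: float

generator = OllamaGenerator(model="llama3")

result = generator.run(
    prompt="Return only JSON with keys invoice_number and total for this invoice: ..."
)

# Validate the reply against the predefined Pydantic class.
invoice = Invoice.model_validate_json(result["replies"][0])
```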
