LangExtract is a library from developers at Google that makes it simple to turn messy, unstructured text into clean, structured data by leveraging LLMs. Users can provide a few few-shot examples along with a custom schema and get results based on that. It works with both proprietary and local LLMs (via Ollama).
A significant amount of data in healthcare is unstructured, making it an ideal area where a tool like this can be useful. Clinical notes are long and filled with abbreviations and inconsistencies. Important details such as drug names, dosages, and especially adverse drug reactions (ADRs) get buried in the text. Therefore, for this article, I wanted to see if LangExtract could handle adverse drug reaction (ADR) detection in clinical notes. More importantly, is it effective? Let’s find out. Note that while LangExtract is an open-source project from developers at Google, it is not an officially supported Google product.
Just a quick note: I’m only showing how LangExtract works. I’m not a doctor, and this isn’t medical advice.
▶️ Here’s a detailed Kaggle notebook to follow along.
Why ADR Extraction Matters
An Adverse Drug Reaction (ADR) is a harmful, unintended result caused by taking a medication. These can range from mild side effects like nausea or dizziness to severe outcomes that may require medical attention.
Detecting them quickly is critical for patient safety and pharmacovigilance. The challenge is that in clinical notes, ADRs are buried alongside past conditions, lab results, and other context, which makes detecting them hard. Using LLMs to detect ADRs is an ongoing area of research: some recent work has shown that LLMs are good at raising red flags but are not yet reliable. So ADR extraction is a good stress test for LangExtract, since the goal here is to see whether this library can spot the adverse reactions among the other entities in clinical notes, such as medications, dosages, and severity.
How LangExtract Works
Before we jump into usage, let’s break down LangExtract’s workflow. It’s a simple three-step process:
- Define your extraction task by writing a clear prompt that specifies exactly what you want to extract.
- Provide a few high-quality examples to guide the model toward the format and level of detail you expect.
- Submit your input text, choose the model, and let LangExtract process it. You can then review the results, visualize them, or pass them directly into your downstream pipeline.
The official GitHub repository has detailed examples spanning multiple domains, from entity extraction in Shakespeare’s Romeo & Juliet to medication identification in clinical notes and structuring radiology reports. Do check them out.
Installation
First we need to install the LangExtract library. It’s always a good idea to do this inside a virtual environment to keep your project dependencies isolated.
pip install langextract
Identifying Adverse Drug Reactions in Clinical Notes with LangExtract & Gemini
Now let’s get to our use case. For this walkthrough, I’ll use Google’s Gemini 2.5 Flash model. You could also use Gemini Pro for more complex reasoning tasks. You’ll need to set your API key first:
export LANGEXTRACT_API_KEY="your-api-key-here"
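Inside Python, you can verify the key is available before running anything. A minimal sketch (LangExtract also picks the key up from this environment variable automatically, so passing it explicitly is optional):

```python
import os

# Warn early if the key was not exported in the shell
api_key = os.environ.get("LANGEXTRACT_API_KEY", "")
if not api_key:
    print("Warning: LANGEXTRACT_API_KEY is not set; API calls will fail.")
```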
Step 1: Define the Extraction Task
Let’s create our prompt for extracting medications, dosages, adverse reactions, and actions taken. We can also ask for severity where mentioned.
import textwrap

import langextract as lx

prompt = textwrap.dedent("""\
    Extract medication, dosage, adverse reaction, and action taken from the text.
    For each adverse reaction, include its severity as an attribute if mentioned.
    Use exact text spans from the original text. Do not paraphrase.
    Return entities in the order they appear.""")

Next, let’s provide an example to guide the model toward the correct format:
# 1) Define the prompt
prompt = textwrap.dedent("""\
    Extract condition, medication, dosage, adverse reaction, and action taken from the text.
    For each adverse reaction, include its severity as an attribute if mentioned.
    Use exact text spans from the original text. Do not paraphrase.
    Return entities in the order they appear.""")

# 2) Example
examples = [
    lx.data.ExampleData(
        text=(
            "After taking ibuprofen 400 mg for a headache, "
            "the patient developed mild stomach pain. "
            "They stopped taking the medicine."
        ),
        extractions=[
            lx.data.Extraction(
                extraction_class="condition",
                extraction_text="headache"
            ),
            lx.data.Extraction(
                extraction_class="medication",
                extraction_text="ibuprofen"
            ),
            lx.data.Extraction(
                extraction_class="dosage",
                extraction_text="400 mg"
            ),
            lx.data.Extraction(
                extraction_class="adverse_reaction",
                extraction_text="mild stomach pain",
                attributes={"severity": "mild"}
            ),
            lx.data.Extraction(
                extraction_class="action_taken",
                extraction_text="They stopped taking the medicine"
            )
        ]
    )
]
Step 2: Provide the Input and Run the Extraction
For the input, I’m using a real clinical sentence from the ADE Corpus v2 dataset on Hugging Face.
input_text = (
    "A 27-year-old man who had a history of bronchial asthma, "
    "eosinophilic enteritis, and eosinophilic pneumonia presented with "
    "fever, skin eruptions, cervical lymphadenopathy, hepatosplenomegaly, "
    "atypical lymphocytosis, and eosinophilia two weeks after receiving "
    "trimethoprim (TMP)-sulfamethoxazole (SMX) therapy."
)
Next, let’s run LangExtract with the Gemini 2.5 Flash model.
import os

result = lx.extract(
    text_or_documents=input_text,
    prompt_description=prompt,
    examples=examples,
    model_id="gemini-2.5-flash",
    api_key=os.environ["LANGEXTRACT_API_KEY"]
)
Step 3: View the Results
You can display the extracted entities along with their positions:
print(f"Input: {input_text}\n")
print("Extracted entities:")
for entity in result.extractions:
    position_info = ""
    if entity.char_interval:
        start, end = entity.char_interval.start_pos, entity.char_interval.end_pos
        position_info = f" (pos: {start}-{end})"
    print(f"• {entity.extraction_class.capitalize()}: {entity.extraction_text}{position_info}")

LangExtract correctly identifies the adverse drug reaction without confusing it with the patient’s pre-existing conditions, which is a key challenge in this kind of task.
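Once you have the extractions, post-processing is plain Python. Below is a minimal sketch that groups entity texts by class for downstream use; the `Extraction` dataclass here is a simplified stand-in for LangExtract’s own objects, and the sample entities are illustrative, not real model output:

```python
from collections import defaultdict
from dataclasses import dataclass

# Simplified stand-in for lx.data.Extraction (for illustration only)
@dataclass
class Extraction:
    extraction_class: str
    extraction_text: str

entities = [
    Extraction("medication", "trimethoprim (TMP)-sulfamethoxazole (SMX)"),
    Extraction("adverse_reaction", "fever"),
    Extraction("adverse_reaction", "skin eruptions"),
]

# Group the extracted text spans by their entity class
by_class = defaultdict(list)
for e in entities:
    by_class[e.extraction_class].append(e.extraction_text)

print(dict(by_class))
```

The same loop works unchanged on `result.extractions` from a real run.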
If you want to visualize the results, you can save them to a .jsonl file. You can then load that .jsonl file with the visualization function, and it will create an HTML view for you.
lx.io.save_annotated_documents(
    [result],
    output_name="adr_extraction.jsonl",
    output_dir="."
)

# Display the HTML content directly (in a notebook)
from IPython.display import display

html_content = lx.visualize("adr_extraction.jsonl")
display(html_content)

Working with Longer Clinical Notes
Real clinical notes are often much longer than the example above. For instance, here is an actual note from the ADE-Corpus-V2 dataset, released under the MIT License. You can access it on Hugging Face or Zenodo.

To process longer texts with LangExtract, you keep the same workflow but add three parameters:
- extraction_passes runs multiple passes over the text to catch more details and improve recall.
- max_workers controls parallel processing so larger documents can be handled faster.
- max_char_buffer splits the text into smaller chunks, which helps the model stay accurate even when the input is very long.
result = lx.extract(
    text_or_documents=input_text,
    prompt_description=prompt,
    examples=examples,
    model_id="gemini-2.5-flash",
    extraction_passes=3,
    max_workers=20,
    max_char_buffer=1000
)
Here is the output. For brevity, I’m only showing a portion of it here.

If you want, you can also pass a document’s URL directly to the text_or_documents parameter.
Using LangExtract with Local Models via Ollama
LangExtract isn’t limited to proprietary APIs. You can also run it with local models through Ollama. This is especially useful when working with privacy-sensitive clinical data that can’t leave your secure environment. You can set up Ollama locally, pull your preferred model, and point LangExtract at it. Full instructions are available in the official docs.
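As a rough sketch of what that looks like: the parameter names below follow the project’s README, but `gemma2:2b` and the default Ollama port are assumptions for illustration, and the actual `lx.extract` call is commented out since it needs a running Ollama server with that model pulled:

```python
# Parameters for pointing LangExtract at a local Ollama server (illustrative)
ollama_params = dict(
    model_id="gemma2:2b",                # any model you have pulled locally
    model_url="http://localhost:11434",  # default Ollama endpoint
    fence_output=False,
    use_schema_constraints=False,
)

# result = lx.extract(
#     text_or_documents=input_text,
#     prompt_description=prompt,
#     examples=examples,
#     **ollama_params,
# )
```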
Conclusion
If you’re building an information retrieval system or any application involving metadata extraction, LangExtract can save you a significant amount of preprocessing effort. In my ADR experiments, LangExtract performed well, correctly identifying medications, dosages, and reactions. What I noticed is that the output depends directly on the quality of the few-shot examples provided by the user, which means that while LLMs do the heavy lifting, humans still remain an important part of the loop. The results were encouraging, but since clinical data is high-risk, broader and more rigorous testing across diverse datasets is still needed before moving toward production use.
