Context is everything an LLM can see before it generates a response. This includes the prompt itself, instructions, examples, retrieved documents, tool outputs, and even the prior conversation history.
Context has a big impact on answer quality. For example, if you ask an LLM to write a SQL query without providing the data schema, the result will almost certainly be suboptimal. Worse, if the model has no access to the database at all, it may simply hallucinate a query that doesn't work. Even when tools are available, the model still needs extra time and effort to infer the schema before it can produce a correct answer.
Because context plays such a central role in LLM-based applications, context engineering has emerged as a discipline focused on systematically optimising what information goes into a model's prompt. The goal is to build "self-improving" systems that learn from experience without relying on expensive fine-tuning (retraining models and updating millions of parameters).
Context engineering comes with several key advantages:
- it's cheaper and doesn't require specialised fine-tuning expertise;
- context and instructions remain transparent, interpretable, and easy for humans to modify;
- iteration cycles are much faster, since updates can be made instantly without retraining or redeploying models;
- it's more agile, especially when information needs to be forgotten for privacy or legal reasons.
With all these advantages, it's not surprising that context engineering is gaining so much attention. What's interesting, though, is how quickly the approaches themselves are evolving. In this article, I'll walk through that evolution and then experiment with one of the newer frameworks for prompt optimisation: Agentic Context Engineering (ACE).
Evolution of context engineering approaches
Context engineering didn't appear overnight. It has evolved through several distinct stages.
The earliest stage was static prompting. Here, prompts were hand-crafted instructions that never changed. Much of the effort went into classic prompt engineering: carefully choosing wording, structure, and formatting to squeeze better performance out of the model.
The next major step was dynamic retrieval. Instead of relying on a fixed prompt, systems began pulling in relevant information (documents, examples, or facts) at inference time. Retrieval-Augmented Generation (RAG) became one of the most popular approaches in this category. By grounding responses in external knowledge, RAG significantly improved accuracy and reduced hallucinations, especially for knowledge-heavy tasks.
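To make the retrieval idea concrete, here is a minimal, illustrative sketch of the RAG pattern. It uses plain keyword overlap instead of a real vector store, and all names are hypothetical:

```python
# Toy retrieval-augmented prompt assembly: score documents by keyword
# overlap with the query and prepend the best match to the prompt.
# Illustration only; real RAG systems use embedding-based search.

def retrieve(query: str, documents: list[str]) -> str:
    """Return the document sharing the most words with the query."""
    query_words = set(query.lower().split())
    return max(documents, key=lambda d: len(query_words & set(d.lower().split())))

documents = [
    "The orders table has columns: id, user_id, total, created_at.",
    "Our refund policy allows returns within 30 days.",
]
query = "Write SQL to sum total per user from the orders table."
context = retrieve(query, documents)
prompt = f"Context:\n{context}\n\nQuestion: {query}"
print(prompt)
```

Grounding the prompt in the schema document is exactly the step that prevents the hallucinated-query failure described earlier.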
More recently, the focus has shifted toward self-improving contexts. Rather than treating context as something that is simply retrieved or injected, these approaches allow the system to update and refine its own context based on past performance. In other words, the prompt itself becomes adaptive, evolving through reflection and feedback.
A number of frameworks have emerged around this idea. Below are some of the most influential ones.
- One of the earliest and most significant works is "Reflexion: Language Agents with Verbal Reinforcement Learning" by Shinn et al. This research introduced the idea that language agents can learn from mistakes through natural language reflection rather than gradient-based updates. Reflexion agents analyse feedback from previous attempts, generate verbal reflections about what went wrong, and store these reflections in an episodic memory buffer. These stored reflections then guide better decision-making in subsequent trials.
- Another important contribution is "TextGrad: Automatic Differentiation via Text" by Yuksekgonul et al. TextGrad borrows concepts from deep learning optimisation (such as gradients, backpropagation, and gradient descent) but replaces numerical derivatives with natural language feedback. In this framework, LLMs generate textual critiques describing how a variable should change to improve the outcome. These "textual gradients" are then propagated backwards through the system using prompting, effectively performing a natural-language version of backpropagation across a compound AI system.
- The paper "GEPA: Reflective Prompt Evolution Can Outperform Reinforcement Learning" by Agrawal et al. takes a different angle by combining evolutionary algorithms with language-based reflection. Prompts are treated like organisms: they mutate, compete, and evolve under selection pressure. Over time, better-performing prompts survive and propagate. This approach is implemented in DSPy, and Hugging Face provides a practical guide for applying it in real-world use cases.
- Finally, "Dynamic Cheatsheet: Test-Time Learning with Adaptive Memory" by Suzgun et al. explores test-time learning through persistent memory. In this setup, a black-box LLM is given a notebook where it can write down useful strategies, patterns, and code snippets during inference. Instead of repeatedly rediscovering the same insights, the model accumulates and reuses knowledge across tasks. This adaptive memory significantly improves performance without requiring explicit labels or human feedback.
Agentic Context Engineering
Now that we've covered how context engineering has evolved, let's take a closer look at Agentic Context Engineering (ACE), one of the more recent approaches and the main focus of this article. ACE is introduced in the paper "Agentic Context Engineering: Evolving Contexts for Self-Improving Language Models" by Zhang et al., published in 2025.
The paper begins by identifying two key problems with existing self-improving context methods.
- Brevity bias is the tendency for systems to oversimplify important details and gradually collapse toward short, generic prompts. While compact prompts are appealing, they often lose the nuances that actually drive good performance.
- Context collapse. When systems repeatedly rewrite the entire prompt, they tend to forget useful knowledge accumulated earlier. Over time, this leads to instability and regressions rather than steady improvement.
To address these issues, the authors propose Agentic Context Engineering (ACE), a framework designed for scalable and efficient context adaptation in both offline settings (such as system prompt optimisation) and online scenarios (like test-time memory adaptation). Instead of compressing knowledge into a single static prompt, ACE allows the model to continuously evolve its context by accumulating successful strategies, reflecting on failures, and organising knowledge in a structured way. Conceptually, it resembles an AI assistant that improves over time by keeping detailed notes and refining its own playbook.
At the core of ACE is an agentic learning loop that mirrors how humans learn through experimentation: try, reflect, and consolidate. The framework consists of three components:
- the Generator, which produces reasoning trajectories while solving tasks;
- the Reflector, which analyses successes and failures and distils actionable insights;
- the Curator, which integrates these insights into the shared context as small, incremental updates.
Rather than maintaining a single monolithic prompt, ACE organises context as a playbook made up of structured bullet points. Each bullet contains metadata (such as a unique identifier and counters tracking how often it has been helpful or harmful) as well as content representing a small, reusable unit of knowledge. This can be a general strategy, a domain-specific concept, or a common failure mode.
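Based on that description, a single playbook bullet could be modelled roughly like this (a hypothetical sketch; the field names are mine, not the paper's exact schema):

```python
from dataclasses import dataclass

@dataclass
class PlaybookBullet:
    """One reusable unit of knowledge in an ACE-style playbook."""
    bullet_id: str          # unique identifier, e.g. "dis-00011"
    content: str            # the strategy, concept, or failure mode
    helpful_count: int = 0  # times the bullet was marked helpful
    harmful_count: int = 0  # times the bullet was marked harmful/misleading

    def render(self) -> str:
        """Format the bullet the way it appears in the playbook text."""
        return (f"[{self.bullet_id}] helpful={self.helpful_count} "
                f"harmful={self.harmful_count} :: {self.content}")

bullet = PlaybookBullet("cls-00002",
                        "Apply specificity hierarchy when multiple categories could apply.",
                        18, 4)
print(bullet.render())
```

The counters are what lets the system later decide which bullets earn their place in the context and which should be pruned.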
The ACE workflow consists of several phases.
- Generation phase. The Generator tackles new problems using the current playbook, marking which bullets were helpful or misleading.
- Reflection phase. The Reflector analyses the full trajectory, extracting lessons from both successes and failures through iterative refinement.
- Curation phase. The Curator turns these insights into compact "delta" updates: new or modified bullets that are merged into the existing playbook using lightweight, non-LLM logic.
- Grow-and-refine phase. New bullets are appended, existing ones are updated in place, and periodic deduplication removes redundancy using semantic embeddings.
This design enables parallel processing of multiple updates and supports multi-epoch adaptation, where the same queries can be revisited to progressively strengthen the context over time.
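Putting the phases together, the control flow of the loop can be sketched as follows. The three agent functions are stubs standing in for LLM calls; this illustrates the structure under my own assumptions, not the real implementation:

```python
# Illustrative skeleton of the ACE loop: try, reflect, consolidate.
# Each stub below would be an LLM invocation in the real framework,
# and curation runs on batches of reflections rather than one by one.

def generate(task, playbook):
    """Generator stub: solve the task with the playbook, noting bullets used."""
    return {"task": task, "answer": f"answer to {task}", "used_bullets": list(playbook)}

def reflect(trajectory):
    """Reflector stub: distil one actionable lesson from the trajectory."""
    return f"lesson from {trajectory['task']}"

def curate(playbook, lessons):
    """Curator stub: merge lessons as incremental delta updates."""
    for lesson in lessons:
        playbook[f"new-{len(playbook) + 1:05d}"] = lesson  # append, never rewrite
    return playbook

playbook = {"cls-00001": "prefer the most specific category"}
lessons = []
for task in ["classify query A", "classify query B"]:
    trajectory = generate(task, playbook)  # generation phase
    lessons.append(reflect(trajectory))    # reflection phase
playbook = curate(playbook, lessons)       # curation phase (batched)
print(sorted(playbook))
```

Note that `curate` only appends or edits individual bullets; never rewriting the whole playbook is exactly what protects ACE from context collapse.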
Empirically, ACE delivers strong results. On benchmark evaluations, it outperforms other self-improving context approaches, achieving a +10.6% improvement on AI agent tasks and a +8.6% gain in specialised domains such as finance.

Beyond accuracy, ACE is also more cost-efficient thanks to its incremental update mechanism, showing 83.6% lower token costs compared to baseline methods.
Together, these results position ACE as a practical and scalable step forward in building self-improving LLM systems.
Using ACE for banking intent data
The ACE framework looks promising on paper, so the next step is to see how it performs in practice. Fortunately, the authors have shared an open-source implementation on GitHub, which gives us a solid starting point.
Loading the data
To keep the experiment focused, I decided to apply ACE to a classification task. I'm using a publicly available dataset of banking intents released by PolyAI. This dataset reflects a very common real-world problem: identifying customer intent when someone contacts customer support. Accurate intent classification is crucial for routing requests to the right team, triggering semi-automated responses, or simply monitoring recurring issues.
In this dataset, each customer message (for example, "I'm not sure why my card didn't work") needs to be mapped to a specific banking intent, such as declined_card_payment. In total, there are 77 distinct intent categories.
To keep the experiment manageable, I sampled 500 examples from the dataset and split them into training, test, and validation sets. Below is the code used to load the data and create the splits.
import random
import pandas as pd

full_df = pd.read_csv('./poly_ai_banking_data/train.csv')

# params
total_number_of_samples = 500
train_share = 0.5
test_share = 0.4
val_share = 0.1

sample_df = full_df.sample(n=total_number_of_samples, random_state=42) \
    .reset_index(drop=True)

random.seed(42)
sample_df['group'] = random.choices(['train', 'test', 'val'],
    weights=(train_share, test_share, val_share), k=total_number_of_samples)

train_df = sample_df[sample_df['group'] == 'train'].reset_index(drop=True)
test_df = sample_df[sample_df['group'] == 'test'].reset_index(drop=True)
val_df = sample_df[sample_df['group'] == 'val'].reset_index(drop=True)
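One thing worth noting about this splitting approach: random.choices assigns a group to each row independently, so the actual group sizes only approximate the 50/40/10 shares. A standalone check with the same weights illustrates this:

```python
import random
from collections import Counter

# Same weights and sample count as the split above, stdlib only
random.seed(42)
groups = random.choices(['train', 'test', 'val'], weights=(0.5, 0.4, 0.1), k=500)
counts = Counter(groups)
print(counts)  # sizes are close to, but not exactly, 250/200/50
```

For a small experiment like this the jitter is harmless; for exact splits you would shuffle indices and slice instead.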
Extending ACE to banking intent data
The next step is to extend the ACE framework so it can work with our banking intent dataset. Fortunately, the authors provide a detailed guide that makes this process relatively straightforward.
In addition to plugging in the new dataset, I made a couple of small modifications to the core framework to support Anthropic models and configurable temperature settings. You can find the complete, modified version of the code on GitHub.
Preparing the data
The first thing we need to do is prepare the dataset in a format that ACE expects. I saved the training, validation, and test splits as CSV files under banking/data. Each example contains:
- text: the customer support message;
- category: the target intent label we want to predict;
- group: an auxiliary field indicating whether the example belongs to the train, test, or validation set.
The group field won't be used later by the framework itself, but it's handy for dataset management and reproducibility.
Here's what the data format looks like.
text,category,group
Is it possible for me to change my PIN number?,change_pin,test
What's the $1 transaction on my account?,extra_charge_on_statement,test
How much does top up fees cost?,top_up_by_card_charge,test
I live in the EU - can I get a card?,country_support,test
Next, we need to tell ACE where to find each split. This is done by specifying dataset paths in banking/data/task_config.json.
{
    "banking": {
        "train_data": "./banking/data/train.csv",
        "val_data": "./banking/data/val.csv",
        "test_data": "./banking/data/test.csv"
    }
}
Implementing the DataProcessor
To integrate a new task, the framework requires a custom DataProcessor module. According to the guide, this involves implementing three core methods: process_task_data, answer_is_correct, and evaluate_accuracy.
In addition, we need a helper function to load the raw data from disk. Let's start with that.
Below is the implementation of the data-loading function. It reads a CSV file, validates its existence, and converts each row into a dictionary that the rest of the pipeline can work with.
import csv
import os
from typing import Any, Dict, List

def load_data(data_path: str) -> List[Dict[str, Any]]:
    """
    Load and process data from a CSV file.

    Expected CSV format: text,category,group (with header)

    Args:
        data_path: Path to the CSV file
    Returns:
        List of dictionaries containing the data
    """
    if not os.path.exists(data_path):
        raise FileNotFoundError(f"Data file not found: {data_path}")

    data = []
    with open(data_path, 'r', encoding='utf-8') as f:
        reader = csv.DictReader(f)
        for row in reader:
            data.append({
                'text': row['text'],
                'category': row['category'],
                'group': row.get('group', '')
            })
    print(f"Loaded {len(data)} samples from {data_path}")
    return data
With the data-loading function in place, we can move on to implementing the remaining DataProcessor methods.
The main purpose of process_task_data is to convert the raw dataset into ACE's standardised input format.
ACE expects each example to contain three fields: context, question, and target. In our case, the mapping is fairly straightforward. We map the intent category directly to target, and we leave context empty since there's no additional background information needed for classification.
The most important part here is the question. We added extra context to make it clear to the LLM that it should classify the query rather than answer questions directly, while also providing the list of available topics to guide the LLM's response.
def process_task_data(self, raw_data: List[Dict]) -> List[Dict]:
    """
    Convert raw CSV data into the standardized format for ACE.

    Args:
        raw_data: Raw data loaded from CSV (list of dicts with 'text', 'category')
    Returns:
        List of dicts with keys: 'context', 'question', 'target'
    """
    processed_data = []

    # Gather the list of topics to include in the question
    topics_list = ", ".join(self.allowed_topics)

    for item in raw_data:
        customer_query = item.get('text', '')
        ground_truth_topic = item.get('category', '')

        # The question provides the classification task instruction
        question = (
            f"Classify the following banking customer support query into one of the predefined topics.\n\n"
            f"Customer Query: {customer_query}\n\n"
            f"Available Topics: {topics_list}\n\n"
            f"Answer with ONLY the topic name, nothing else."
        )

        processed_item = {
            "context": "",  # No additional context needed
            "question": question,
            "target": ground_truth_topic,
            "others": {
                "original_text": customer_query,
                "task": self.task_name,
            }
        }
        processed_data.append(processed_item)

    return processed_data
The next method, answer_is_correct, checks whether a model's prediction matches the ground truth label. Since we explicitly instruct the LLM to answer with only the category name, a simple case-insensitive string comparison is sufficient here.
def answer_is_correct(self, predicted: str, ground_truth: str) -> bool:
    """
    Check if the predicted topic matches the ground truth.
    Uses simple case-insensitive comparison.

    Args:
        predicted: Model's predicted topic
        ground_truth: Ground truth topic
    Returns:
        bool: True if the prediction is correct, False otherwise
    """
    return predicted.lower().strip() == ground_truth.lower().strip()
The final method we need to implement is evaluate_accuracy, which computes overall classification accuracy across multiple predictions. There's nothing fancy going on here. We simply calculate the fraction of cases where answer_is_correct(prediction, ground_truth) returns True.
def evaluate_accuracy(self, predictions: List[str], ground_truths: List[str]) -> float:
    """
    Calculate classification accuracy across multiple predictions.

    Args:
        predictions: List of model predictions
        ground_truths: List of ground truth topics
    Returns:
        Accuracy as a float between 0 and 1
    """
    if len(predictions) != len(ground_truths):
        raise ValueError("Predictions and ground truths must have the same length")
    if not predictions:
        return 0.0
    correct = sum(
        1 for pred, truth in zip(predictions, ground_truths)
        if self.answer_is_correct(pred, truth)
    )
    return correct / len(predictions)
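To see the two evaluation methods working together, here is a small standalone demo with free functions mirroring the same logic (no class needed for illustration):

```python
def answer_is_correct(predicted: str, ground_truth: str) -> bool:
    # Case-insensitive, whitespace-tolerant match, as in the DataProcessor
    return predicted.lower().strip() == ground_truth.lower().strip()

def evaluate_accuracy(predictions, ground_truths):
    if len(predictions) != len(ground_truths):
        raise ValueError("Predictions and ground truths must have the same length")
    if not predictions:
        return 0.0
    correct = sum(answer_is_correct(p, t) for p, t in zip(predictions, ground_truths))
    return correct / len(predictions)

predictions = ["Change_PIN", " declined_card_payment ", "card_arrival"]
ground_truths = ["change_pin", "declined_card_payment", "card_delivery_estimate"]
print(evaluate_accuracy(predictions, ground_truths))  # 2 of 3 correct
```

The first two predictions match after normalisation; the last one is a genuine miss, so the score is 2/3.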
Putting together the workflow script
With the DataProcessor in place, the next step is to assemble a complete run script for ACE. I created a run_ace_workflow script that accepts several key arguments:
- api_provider selects the language model API to use ('anthropic', 'openai', 'together', or 'sambanova'), defaulting to 'anthropic';
- generator_model specifies the model for the Generator agent (default: 'claude-haiku-4-5');
- reflector_model specifies the model for the Reflector agent (default: 'claude-sonnet-4-5');
- curator_model specifies the model for the Curator agent (default: 'claude-sonnet-4-5');
- max_train and max_test are optional limits on the train and test set sizes, useful for quick experiments or debugging.
Let's discuss how this script actually works. The script starts by loading the banking intent data and initialising the DataProcessor. Here's the helper function I wrote for this.
def load_banking_data(max_train=None, max_test=None):
    """Load and process the banking dataset."""
    from banking.data_processor import DataProcessor, load_data

    base_path = os.path.dirname(__file__)
    data_path = os.path.join(base_path, "data")

    # Load raw data
    train_raw = load_data(os.path.join(data_path, "train.csv"))
    val_raw = load_data(os.path.join(data_path, "val.csv"))
    test_raw = load_data(os.path.join(data_path, "test.csv"))

    # Limit samples if specified
    if max_train:
        train_raw = train_raw[:max_train]
        val_raw = val_raw[:max(max_train // 4, 10)]
    if max_test:
        test_raw = test_raw[:max_test]

    # Process data
    processor = DataProcessor(task_name="banking")
    train_samples = processor.process_task_data(train_raw)
    val_samples = processor.process_task_data(val_raw)
    test_samples = processor.process_task_data(test_raw)

    return train_samples, val_samples, test_samples, processor

train_samples, val_samples, test_samples, processor = load_banking_data(
    max_train=args.max_train,
    max_test=args.max_test
)
The next step is to define a playbook template. This is important because the current ACE implementation can't dynamically create new sections, so we predefine the structure to guide the model. Here's the template I used for the banking domain.
BANKING_PLAYBOOK_TEMPLATE = """
## GENERAL
## CLASSIFICATION PRINCIPLES
## CATEGORY DISAMBIGUATION
## BANKING DOMAIN KNOWLEDGE
## COMMON PATTERNS
## HANDLING AMBIGUOUS QUERIES
## COMMON MISTAKES TO AVOID
## OTHERS
"""
With the data and template ready, we can initialise the ACE object with the main parameters.
ace_system = ACE(
    api_provider=args.api_provider,
    generator_model=args.generator_model,
    reflector_model=args.reflector_model,
    curator_model=args.curator_model,
    max_tokens=4096,
    initial_playbook=BANKING_PLAYBOOK_TEMPLATE,
    use_bulletpoint_analyzer=True,  # enables deduplication of bullet points in the playbook
    generator_temperature=0.1,  # prioritising consistency for the generator
    reflector_temperature=0.7,  # prioritising creativity for the reflector and curator
    curator_temperature=0.7,
)
Finally, we define a function to run the ACE training workflow, which includes initial evaluation, iterative reflection, curation, and final evaluation.
def run_ace_training(ace_system, train_samples, val_samples, test_samples, processor, results_dir):
    """Train ACE to improve the playbook (includes initial and final evaluations)."""
    config = {
        'num_epochs': 1,
        'max_num_rounds': 3,  # max reflection rounds per sample
        'curator_frequency': 5,  # run the curator every 5 steps
        'eval_steps': max(len(train_samples) // 10, 10),  # evaluate 10 times during training
        'save_steps': max(len(train_samples) // 10, 10),
        'playbook_token_budget': 80000,
        'task_name': 'banking_ace',
        'json_mode': False,
        'no_ground_truth': False,
        'save_dir': os.path.join(results_dir, "training"),
        'test_workers': 10,
    }

    results = ace_system.run(
        mode='offline',
        train_samples=train_samples,
        val_samples=val_samples,
        test_samples=test_samples,
        data_processor=processor,
        config=config
    )

    # Extract results for logging
    initial_acc = results.get('initial_test_results', {}).get('accuracy', 0)
    final_acc = results.get('final_test_results', {}).get('accuracy', 0)
    training_results = results.get('training_results', {})

    return ace_system.best_playbook, results

best_playbook, training_results = run_ace_training(
    ace_system, train_samples, val_samples, test_samples,
    processor, results_dir
)
And that's it! That's all the core logic we need to run ACE. I've added some logging on top of the workflow for convenience, but it's not essential to the main functionality.
Results
Let's take a look at the results and see how everything comes together. First, check out the best playbook, which you can find at results/banking_{dt}/best_playbook.txt. The playbook is organised into itemised bullets, grouped according to the categories we defined in our initial template. Each bullet contains detailed instructions and strategies, along with metadata showing how often it was marked helpful or harmful. This structure makes it easy to see which topics and strategies the system found most useful during training.
## GENERAL
## CLASSIFICATION PRINCIPLES
[cls-00001] helpful=1 harmful=0 :: Temporal indicators like 'was able to before', 'worked previously', or 'used to work' are strong signals that the issue is specific to the current transaction rather than a general system capability problem. These phrases suggest a change in status for a specific entity (beneficiary, card, account) rather than overall functionality.
[cls-00002] helpful=18 harmful=4 :: Apply specificity hierarchy: when multiple categories could apply, choose the most specific one that matches the contextual clues. For example, beneficiary_not_allowed (specific to recipient) is more specific than declined_transfer (general failure).
[cls-00009] helpful=0 harmful=3 :: Specificity hierarchy works bidirectionally: choose specific categories when contextual clues point to a particular transaction type, but use general categories (like 'extra_charge_on_statement') when the query lacks sufficient context to determine the specific nature of the transaction. Don't force specificity when the customer's query is inherently general.
[cls-00017] helpful=5 harmful=1 :: Process-oriented vs Status-tracking distinction: Differentiate between questions about HOW to obtain/acquire something (process-oriented) versus questions about WHEN something will arrive or WHETHER it has arrived (status-tracking). Process questions focus on the steps and components needed, while status questions focus on timing and delivery confirmation. Use this distinction to choose between acquisition categories and tracking/arrival categories.
## CATEGORY DISAMBIGUATION
[dis-00003] helpful=1 harmful=0 :: declined_transfer vs beneficiary_not_allowed: If the customer mentions they could transfer before but suddenly cannot, this strongly indicates beneficiary_not_allowed (recipient is blocked/restricted) rather than declined_transfer (general transfer failure due to funds, limits, or system errors).
[dis-00011] helpful=11 harmful=0 :: pending_* vs failed_* vs declined_*: Transaction state is crucial for classification. 'Hasn't gone through yet' or 'taking too long' = pending state. 'Didn't work', 'was declined', or 'was rejected' = failed/declined state. 'Money came back' or 'was returned' = reverted state. Match the category to the exact transaction state described.
[dis-00012] helpful=0 harmful=1 :: country_support vs supported_cards_and_currencies: Queries about geographic availability ('which countries', 'where can I', 'what regions') should be classified as 'country_support'. In contrast, 'supported_cards_and_currencies' is for questions about card types (Visa, Mastercard) and currency options, not geographic availability.
[dis-00014] helpful=2 harmful=0 :: Cash withdrawal issues: Distinguish by transaction state and outcome: 'pending_cash_withdrawal' (not completed yet, still processing), 'declined_cash_withdrawal' (rejected, no cash received), 'cash_withdrawal_not_recognised' (customer doesn't recall the transaction), and 'wrong_amount_of_cash_received' (transaction completed but incorrect amount dispensed). If cash was received but the amount was wrong, use the most specific category: wrong_amount_of_cash_received.
[dis-00015] helpful=3 harmful=3 :: card_arrival vs get_physical_card: Distinguish between status-tracking questions (card_arrival) and process-acquisition questions (get_physical_card). 'card_arrival' is for tracking existing orders ('Has my card arrived?', 'Where is my card?'). 'get_physical_card' encompasses the entire process of obtaining a physical card including all components like PIN ('Where can I find my PIN?', 'How do I get my card and PIN?'). Questions about missing PINs with 'haven't gotten it yet' indicate the customer is in the acquisition process, not just tracking delivery.
[dis-00021] helpful=1 harmful=0 :: card_payment_not_recognised vs extra_charge_on_statement: When a customer mentions a 'payment' they don't recognise or didn't make ('payment I never submitted', 'payment I didn't authorize'), classify as 'card_payment_not_recognised' because 'payment' is a specific transaction type. Use 'extra_charge_on_statement' only when the customer describes unexpected amounts, fees, or charges WITHOUT specifying the transaction type (e.g., 'I see an extra $5 on my statement', 'there's a strange charge' without mentioning payment/transfer/withdrawal).
[dis-00024] helpful=0 harmful=1 :: Fee/charge category specificity: When customers ask about fees or charges, prioritize transaction-type-specific fee categories over 'extra_charge_on_statement'. If the query mentions a specific transaction type (transfer, payment, withdrawal, top-up), use the corresponding specific fee category: 'transfer_fee_charged' for transfer fees, 'card_payment_fee_charged' for payment fees, 'atm_fee_charged' for withdrawal fees, 'top_up_fee' for top-up fees. Reserve 'extra_charge_on_statement' only for fee queries where no specific transaction type is mentioned (e.g., 'Why is there an extra $5 charge?' without context).
[dis-00026] helpful=0 harmful=0 :: receiving_money vs transfer_into_account: Distinguish between passive receipt and active transfer. 'receiving_money' is for queries about receiving funds FROM another party (passive, initiated by sender). 'transfer_into_account' is for queries about the customer initiating a transfer TO add funds to their own account (active, self-initiated). Context clues: empty/low balance + asking about transfers = likely transfer_into_account. Questions about 'can I transfer funds' in the context of needing to add money = transfer_into_account, not receiving_money.
[dis-00029] helpful=0 harmful=0 :: beneficiary_not_allowed vs declined_transfer: When a query explicitly mentions 'beneficiary' or 'recipient' combined with restriction language ('not allowed', 'blocked', 'restricted', 'cannot add', 'unable to add'), classify as 'beneficiary_not_allowed' even without temporal indicators. The combination of the specific banking entity term (beneficiary/recipient) with restriction language is a strong direct signal for recipient-level restrictions rather than general transfer failures.
## BANKING DOMAIN KNOWLEDGE
[bank-00006] helpful=0 harmful=0 :: In banking, when a previously successful transfer suddenly fails, common causes include: beneficiary being flagged/blocked by fraud systems, beneficiary account restrictions, or beneficiary being removed from the allowed list. These are distinct from general transfer declines due to insufficient funds or system errors.
[bank-00008] helpful=0 harmful=6 :: Small unexpected amounts (like £1, £0.01) appearing on statements often indicate authorization holds, verification charges, or miscellaneous fees. When customers question these without additional context, they should be classified as 'extra_charge_on_statement' rather than more specific transaction types.
[bank-00018] helpful=0 harmful=0 :: 'card_swallowed' is the banking industry term for ATM card retention scenarios where the machine keeps/retains the customer's card. This applies when cards are stuck, won't come out, or are held by the ATM, regardless of the specific phrasing used by the customer.
[bank-00020] helpful=10 harmful=4 :: Banking terminology has a specificity hierarchy for transaction references. Specific transaction type keywords include: 'payment' (card payments), 'transfer' (money transfers), 'withdrawal' (cash withdrawals), 'top-up' (account funding), 'direct debit', 'standing order'. Generic terms include: 'charge', 'amount', 'transaction', 'fee'. When a customer uses a specific transaction type keyword, it provides sufficient context to classify into transaction-type-specific categories rather than general categories.
## COMMON PATTERNS
[pat-00004] helpful=0 harmful=0 :: Pattern: 'It worked before, now it doesn't' + transfer context = likely a beneficiary-level restriction rather than a system-level decline. The previous success indicates the account and transfer mechanism are functional, pointing to a specific restriction on the current recipient.
[pat-00007] helpful=3 harmful=6 :: Pattern: Customer describes a transaction as 'strange', 'unexpected', 'unexplained', or asks 'what is this charge' on their statement without providing specific transaction type context (transfer, payment, withdrawal, etc.) = classify as 'extra_charge_on_statement'. This is the appropriate general category when the nature of the charge is unclear.
[pat-00010] helpful=8 harmful=1 :: Pattern: Phrases like 'hasn't gone through yet', 'still waiting', 'not completed', or 'still pending' indicate a transaction in a PENDING state, not a FAILED state. Choose 'pending_*' categories over 'failed_*' or 'declined_*' categories when these language cues are present.
[pat-00013] helpful=0 harmful=2 :: Pattern: Questions with geographic scope indicators like 'which countries', 'where can I', 'what regions', or 'in what areas' are asking about service availability by geography = classify as 'country_support'. The core intent is understanding the geographic reach of services.
[pat-00016] helpful=2 harmful=9 :: Pattern: 'Where can I find' or 'How do I get' phrasing indicates process-oriented questions seeking information about obtaining or acquiring something, not status-tracking questions. These should usually map to acquisition/setup categories (like 'get_physical_card') rather than delivery/tracking categories (like 'card_arrival' or 'card_delivery_estimate').
[pat-00019] helpful=0 harmful=0 :: Pattern: Phrases indicating a card is physically retained by an ATM ('card stuck in ATM', 'card won't come out', 'ATM kept my card', 'get my card out of ATM', 'retrieve card from machine') should be classified as 'card_swallowed'. The key indicator is the card being physically held/retained by the machine rather than other card issues like damage, loss, or functionality problems.
[pat-00022] helpful=1 harmful=0 :: Pattern: Specific transaction type keyword + 'not recognised'/'didn't make'/'never submitted' = use the transaction-type-specific 'not_recognised' category. Examples: 'payment I didn't make' → card_payment_not_recognised; 'transfer I don't recognise' → transfer_not_received_by_recipient or a related transfer issue; 'withdrawal I never made' → cash_withdrawal_not_recognised. The presence of a specific transaction type keyword (payment, transfer, withdrawal) is sufficient context to avoid general categories.
[pat-00025] helpful=1 harmful=0 :: Pattern: Transaction type keyword + timing question ('how long', 'when will', 'how much time') + geographic mention = prioritize the transaction-specific timing category (e.g., 'transfer_timing', 'card_delivery_estimate'). Treat geographic mentions as contextual information about the transaction origin/destination unless the query explicitly asks about service availability ('which countries', 'where can I use', 'is it available in'). Example: 'transfer from China, how long?' → 'transfer_timing' (not 'country_support').
[pat-00027] helpful=0 harmful=0 :: Pattern: Account balance context + transfer inquiry = intent to add funds. When a customer mentions their account is empty/has no funds/needs money AND asks about transferring, they are asking about moving funds INTO their account (transfer_into_account), not about receiving money from others (receiving_money). The account state provides crucial context for disambiguating transfer-related intents.
## HANDLING AMBIGUOUS QUERIES
## COMMON MISTAKES TO AVOID
[err-00005] helpful=2 harmful=0 :: Don't default to general categories (like declined_transfer) when temporal context ('was able to before') suggests a more specific issue. The temporal change is a key discriminator that often points to entity-specific restrictions (beneficiary, card, account) rather than general failures.
[err-00023] helpful=2 harmful=0 :: Don't default to 'extra_charge_on_statement' when the customer mentions a specific transaction type (payment, transfer, withdrawal, top-up) they don't recognise. 'extra_charge_on_statement' should be reserved for truly ambiguous cases where no transaction type is specified. When a customer says 'payment I never made', the word 'payment' provides sufficient context to use 'card_payment_not_recognised' instead of the generic 'extra_charge_on_statement'.
[err-00028] helpful=0 harmful=0 :: Don't apply pattern rules or domain knowledge that are irrelevant to the query. If a query has no geographic indicators, don't apply geographic patterns. If there's no mention of fees, don't apply fee-related rules. Focus on rules that directly match the semantic content and context of the customer's query rather than grasping for any applicable rule. Irrelevant rule application leads to misclassification.
## OTHERS
For a deeper look at how each agent operates, you can explore the detailed execution logs at results/banking_{dt}/training/ace_run_{dt}/detailed_llm_logs . I highly recommend browsing these logs. At the very least, skim through the prompts and see how the Generator, Reflector, and Curator interact. It's a great way to understand how ACE evolves the context step by step.
Of course, the most interesting metric is accuracy. You can find the initial and final test results in results/banking_{datetime}/training/initial_test_results.json and results/banking_{datetime}/training/final_test_results.json.
# initial results
{
  "test_results": {
    "accuracy": 0.7512437810945274,
    "correct": 151,
    "total": 201,
    "no_answer": 0
  },
  "error_log": {
    "accuracy": 0.7512437810945274,
    "errors": [
      {
        "index": 2,
        "prediction": "declined_card_payment",
        "ground_truth": "declined_transfer"
      },
      {
        "index": 9,
        "prediction": "top_up_limits",
        "ground_truth": "automatic_top_up"
      },
      {
        "index": 7,
        "prediction": "transfer_not_received_by_recipient",
        "ground_truth": "balance_not_updated_after_cheque_or_cash_deposit"
      },
      ...
    ]
  }
}
# final results
{
  "test_results": {
    "accuracy": 0.736318407960199,
    "correct": 148,
    "total": 201,
    "no_answer": 0
  },
  "error_log": {
    "accuracy": 0.736318407960199,
    "errors": [
      {
        "index": 9,
        "prediction": "top_up_limits",
        "ground_truth": "automatic_top_up"
      },
      {
        "index": 2,
        "prediction": "declined_card_payment",
        "ground_truth": "declined_transfer"
      },
      {
        "index": 7,
        "prediction": "pending_transfer",
        "ground_truth": "balance_not_updated_after_cheque_or_cash_deposit"
      },
      ...
    ]
  }
}
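As a quick sanity check, the reported accuracy values follow directly from the correct/total counts in these JSON files:

```python
# Accuracy is simply correct / total from the test result JSONs
initial_accuracy = 151 / 201
final_accuracy = 148 / 201
print(f"initial: {initial_accuracy:.1%}, final: {final_accuracy:.1%}")  # initial: 75.1%, final: 73.6%
```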
The results, admittedly, are not very impressive. In fact, accuracy slightly dropped after optimisation, from 75.1% to 73.6%. But even negative results can teach us something valuable.
There are a few likely reasons why ACE didn't show much benefit in this case:
- Limited data per category. We only had 248 training examples, 201 test examples, and 51 validation examples, while our task involved 77 different categories. With so few examples per category, the model simply may not have had enough data to learn meaningful distinctions.
- Small and unrepresentative validation set. With only 51 examples, the validation set might not have captured the full diversity of customer queries, making it difficult for ACE to generate useful reflections and improvements.
- Task complexity. Our use case is relatively straightforward. As the authors note, ACE tends to shine in scenarios with large amounts of highly specialised domain knowledge or more complex agentic workflows, where reflection and iterative context refinement can significantly improve performance.
Using ACE for code generation
Undeterred by the previous experiment, I decided to give ACE another try, this time on the Mostly Basic Python Problems (MBPP) dataset (available under the CC BY 4.0 license). Hopefully, the results would be more promising with a code generation task.
Data overview
Each example in the dataset contains three key components:
- Question, for example, "Write a function to reverse words in a given string."
- Ground truth implementation — Python reference code. For example, for the question above
def reverse_words(s):
    return ' '.join(reversed(s.split()))
- Test cases are assertions to validate the generated code, such as
[
assert reverse_words("python program")==("program python"),
assert reverse_words("java language")==("language java"),
assert reverse_words("indian man")==("man indian")
]
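For illustration, running the ground-truth implementation against its test cases works as expected:

```python
def reverse_words(s):
    # Reference implementation: split on whitespace, reverse, and re-join
    return ' '.join(reversed(s.split()))

# The dataset's test cases are plain assert statements
assert reverse_words("python program") == "program python"
assert reverse_words("java language") == "language java"
assert reverse_words("indian man") == "man indian"
print("all test cases passed")
```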
Adding a new task to the ACE framework
We can follow similar steps to extend the ACE framework to handle coding tasks. I won't go into all the implementation details here, since you can find the full code on GitHub. However, it's worth highlighting the key differences compared to the banking intent example.
Coding tasks are inherently more complex. In the banking intent case, the model outputs a single category out of 77, which is easy to compare directly with the ground truth. In code generation, however, the LLM can produce arbitrary code, so we cannot simply check for exact matches. Instead, we need to run tests to determine whether the generated solution is correct.
# banking
def answer_is_correct(self, predicted: str, ground_truth: str) -> bool:
    return predicted.lower() == ground_truth.lower()

# coding
def answer_is_correct(self, predicted: str, ground_truth: str,
                      test_list: List[str], idx: int, save_dir: str) -> bool:
    code = extract_code_from_response(predicted)
    result = execute_code_with_tests(code, test_list, timeout=5)
    return result['success']
Because of this added complexity, I had to implement several enhancements in the DataProcessor for code generation:
- Code extraction. LLMs often include extra context around the code, such as Markdown formatting (```python ...```). We need to clean and extract the code to make sure it can compile correctly.
- Safe execution. Since we run the generated code to verify correctness, it's important to implement basic safety measures, such as timeouts and isolated execution environments.
- Providing full context. It's crucial to include all necessary information in the question. If we just ask the LLM to generate code, it's unlikely to pass the tests because it won't be clear what function name or signature is expected. That's why it's crucial to provide all necessary details in the `question` when standardising the data in the `process_task_data` function.
question = (
    f"Write a Python function to solve the following problem:\n\n"
    f"Problem: {problem_text}\n\n"
    f"Your code must pass the following test cases:\n"
    f"{test_cases_formatted}\n\n"
    f"Important: The test cases will be executed against your code. "
    f"Make sure your function name and signature match what the tests expect.\n\n"
    f"Answer with ONLY the Python code, no explanations."
)
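To make the first two enhancements concrete, here is a minimal sketch of what the two helpers used above might look like. This is an illustrative assumption, not the repo's actual code: the real `extract_code_from_response` and `execute_code_with_tests` are more elaborate (for instance, they also report per-test pass counts and error details), but the core idea — strip Markdown fences, then run the code plus its assertions in a separate process with a timeout — is the same.

```python
import re
import subprocess
import sys

def extract_code_from_response(response: str) -> str:
    # Strip Markdown fences if the LLM wrapped its answer in ```python ... ```
    match = re.search(r"```(?:python)?\s*\n(.*?)```", response, re.DOTALL)
    return match.group(1).strip() if match else response.strip()

def execute_code_with_tests(code: str, test_list: list, timeout: int = 5) -> dict:
    # Run the generated code plus its assertions in an isolated subprocess
    script = code + "\n" + "\n".join(test_list)
    try:
        proc = subprocess.run(
            [sys.executable, "-c", script],
            capture_output=True, text=True, timeout=timeout,
        )
        return {"success": proc.returncode == 0, "stderr": proc.stderr}
    except subprocess.TimeoutExpired:
        return {"success": False, "stderr": "timeout"}

response = "```python\ndef add(a, b):\n    return a + b\n```"
result = execute_code_with_tests(
    extract_code_from_response(response), ["assert add(2, 3) == 5"]
)
print(result["success"])  # True
```

Running in a subprocess (rather than `exec` in-process) keeps infinite loops and crashes in the generated code from taking down the optimisation run.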
In the original ACE implementation, the Reflector compared generated code directly with the ground truth, which works for classification tasks. For coding, however, this approach doesn't make sense: multiple correct solutions can exist, and optimising for code that "looks similar" to the reference doesn't guarantee it will pass the tests.
To address this, I implemented a new method, get_test_feedback, which provides the Reflector with actual test execution results and error messages. The test output becomes the primary signal for correctness, giving far more informative feedback than simple code comparison.
def get_test_feedback(self, predicted: str, ground_truth: str, test_list: List[str] = None) -> str:
    """
    Get detailed test execution feedback for the reflector.

    This method provides the reflector with actual test results and error messages,
    which is more informative than just comparing generated code with ground truth.
    The test output is the primary signal for correctness in code generation tasks.

    Args:
        predicted: Model's predicted code
        ground_truth: Ground truth code (reference only, not used for evaluation)
        test_list: List of test assertions to run

    Returns:
        str: Detailed feedback string with test execution results
    """
    if test_list is None:
        return "No test cases provided - cannot evaluate code."

    # Extract code from response if needed
    code = extract_code_from_response(predicted)

    # Execute code with tests
    result = execute_code_with_tests(code, test_list, timeout=self.timeout)

    # Build detailed feedback
    feedback_parts = []
    if result['success']:
        feedback_parts.append(f"✓ All {result['total']} tests PASSED")
        feedback_parts.append("\nTest cases executed successfully:")
        for i, test in enumerate(test_list, 1):
            feedback_parts.append(f"  {i}. {test} ✓")
    else:
        feedback_parts.append(f"✗ Tests FAILED: {result['passed']}/{result['total']} tests passed")
        if result['timeout']:
            feedback_parts.append("\n⏱ TIMEOUT: Code execution exceeded time limit")
        if result['errors']:
            feedback_parts.append("\n--- ERROR DETAILS ---")
            for error in result['errors']:
                feedback_parts.append(f"  • {error}")
        # Show which tests passed vs failed
        feedback_parts.append("\n--- TEST RESULTS ---")
        for i, test in enumerate(test_list, 1):
            # Check if this specific test appears in errors
            test_failed = any(f"Test {i}" in err for err in result.get('errors', []))
            status = "✗ FAILED" if test_failed else "✓ passed"
            feedback_parts.append(f"  {i}. {test} - {status}")

    # Add extracted code for reference
    feedback_parts.append("\n--- EXTRACTED CODE ---")
    feedback_parts.append(code)

    return "\n".join(feedback_parts)
Alongside this new method, I created a dedicated Reflector prompt tailored for code generation. Its focus is on test results, not line-by-line code comparison.
You are an expert code reviewer and educator. Your job is to analyze why generated code passed or failed test cases, and identify patterns that lead to correct or incorrect solutions.
**IMPORTANT: Test execution results are the PRIMARY signal for correctness.**
- The code is correct if and only if ALL tests pass
- Do NOT compare implementations line-by-line with the reference - different implementations can be equally correct
- Focus on understanding WHY tests passed or failed based on the code's logic
**Instructions:**
- First, examine the Test Execution Results to determine if the code is correct
- If tests FAILED: Analyze what caused the failure (syntax errors, logic errors, edge cases, wrong algorithm)
- If tests PASSED: Identify what the model did well that led to success
- The "Possible Implementation" is just ONE way to solve the problem - the model's approach may be different but equally valid
- Provide actionable insights for improving code generation in the future
- Tag bulletpoints as helpful/harmful/neutral based on whether they contributed to passing tests
Your output should be a json object, which contains the following fields:
- reasoning: analyze the test results and the code's logic, explain why tests passed/failed
- error_identification: if tests failed, what specific issue caused the failure? If tests passed, state "No errors - all tests passed"
- root_cause_analysis: what underlying concept or pattern led to success or failure?
- correct_approach: what coding strategy or pattern should be used for similar problems?
- key_insight: what principle should be remembered for future code generation tasks?
- bullet_tags: a list of json objects with bullet_id and tag for each bulletpoint
**Question:**
{}
**Model's Reasoning Trace:**
{}
**Model's Generated Code:**
{}
**Possible Implementation (Reference Only - NOT the only correct solution):**
{}
**Test Execution Results (PRIMARY SIGNAL):**
{}
**Part of Playbook that is used by the generator to answer the question:**
{}
**Answer in this exact JSON format:**
{{
"reasoning": "[Analyze test results and code logic - why did tests pass or fail?]",
"error_identification": "[What caused test failures? Or 'No errors - all tests passed']",
"root_cause_analysis": "[What concept/pattern led to success or failure?]",
"correct_approach": "[What coding strategy works for this type of problem?]",
"key_insight": "[What principle should be remembered for future code generation?]",
"bullet_tags": [
{{"id": "code-00001", "tag": "helpful"}},
{{"id": "code-00002", "tag": "harmful"}}
]
}}
This coding-specific Reflector is automatically used whenever the task name contains "coding".
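The dispatch itself can be as simple as a substring check on the task name. A simplified sketch, with hypothetical names — the prompt constants and function below are placeholders, not the repo's actual identifiers:

```python
# Hypothetical placeholders standing in for the full prompt texts
CODING_REFLECTOR_PROMPT = "You are an expert code reviewer and educator..."
DEFAULT_REFLECTOR_PROMPT = "You are an expert analyst..."

def select_reflector_prompt(task_name: str) -> str:
    # Tasks whose name contains "coding" get the test-driven Reflector prompt
    if "coding" in task_name.lower():
        return CODING_REFLECTOR_PROMPT
    return DEFAULT_REFLECTOR_PROMPT

print(select_reflector_prompt("coding_mbpp") == CODING_REFLECTOR_PROMPT)  # True
```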
Results
Finally, I ran the prompt optimisation process on a dataset of 500 samples, split into train, test, and validation sets. This time, the results are much more promising: accuracy improved significantly, from 71.1% to 87.1%. In this case, ACE clearly helped optimise the prompts and guide the model toward correct solutions.
Looking at the best playbook, it's quite extensive. Many of the most helpful patterns are general principles, such as:
- Write the simplest correct, Pythonic solution first,
- Treat test cases as the true specification,
- Verify correctness before any further optimisation.
At the same time, the playbook also includes very specific guidance, for example, detailed instructions for tasks like GCD calculations.
Overall, this shows that ACE can effectively capture both high-level strategies and task-specific tips.
## GENERAL
## COMMON MISTAKES TO AVOID
[err-00003] helpful=5 harmful=0 :: Don't add unnecessary complexity to recursive algorithms. For example, in GCD implementations, explicit min/max logic or special cases for checking if a value equals 1 are redundant when using the standard Euclidean algorithm.
[err-00007] helpful=0 harmful=0 :: Don't assume problem constraints match your algorithm's mathematical prerequisites. For example, Fermat's Little Theorem for modular inverse requires a PRIME modulus - verify the problem guarantees this before using pow(a, p-2, p). If constraints aren't specified, choose more general algorithms.
## OTHERS
## CODE GENERATION PRINCIPLES
[cgp-00002] helpful=41 harmful=2 :: Prefer minimal, mathematically sound implementations over complex ones. Avoid adding unnecessary preprocessing logic (like min/max) or special case checks when the core algorithm naturally handles all scenarios.
[cgp-00012] helpful=91 harmful=2 :: Always ensure generated code is syntactically complete before finalizing output. Verify all opened brackets, braces, and parentheses are properly closed, and all statements are fully formed. Incomplete code generation (truncation mid-statement) causes syntax errors that prevent execution regardless of algorithmic correctness.
[cgp-00020] helpful=6 harmful=0 :: When a problem explicitly requires using lambda functions, integrate them naturally with Python's functional programming tools (map, filter, reduce, sorted with key parameter). Don't force lambda usage where it's awkward - these built-in functions are designed to work seamlessly with lambdas for operations like filtering, transformation, and counting.
[cgp-00024] helpful=140 harmful=2 :: Prioritize readable, Pythonic solutions using built-in functions over performance-optimized complex algorithms unless the problem explicitly requires optimization or involves large-scale data. A clear solution using bin(), str methods, or list comprehensions is often preferable to bit manipulation or manual loops. Optimize only when necessary.
[cgp-00047] helpful=56 harmful=2 :: Follow a correctness-first development strategy: (1) implement the straightforward algorithm that correctly solves the problem, even if it isn't optimally efficient, (2) verify correctness with test cases, (3) only then consider optimization if performance is insufficient or the problem explicitly requires it. A correct O(n) solution is infinitely better than a buggy O(log n) attempt. Premature optimization often introduces errors in logic, especially for mathematical or algorithmic problems.
[cgp-00050] helpful=0 harmful=0 :: When multiple algorithmically correct solutions exist, prefer the one with better time/space complexity. A correct O(1) formula-based solution is superior to a correct O(n) iterative solution. However, only optimize if you can maintain correctness - a working O(n) solution is infinitely better than a buggy O(1) attempt. Verify the more efficient approach passes all tests before committing to it.
[cgp-00053] helpful=0 harmful=0 :: When implementing mathematical optimizations (especially for pair/combination counting), verify the optimized approach against test cases through manual calculation BEFORE coding. For each test case: (1) apply your mathematical insight to predict the output, (2) confirm it matches the expected output, (3) only then implement. This catches errors in mathematical reasoning early, preventing bugs that are harder to debug in code than in arithmetic.
[cgp-00057] helpful=0 harmful=0 :: Avoid shadowing Python built-in names (dict, list, str, int, set, tuple, etc.) when naming variables or parameters. Use descriptive alternatives instead: 'd' or 'data' instead of 'dict', 'lst' or 'items' instead of 'list', 's' or 'text' instead of 'str'. Shadowing built-ins makes them inaccessible in that scope and reduces code clarity, even though it is syntactically valid.
[cgp-00059] helpful=2 harmful=0 :: Include defensive programming practices (input validation, bounds checking, type checking) even when not explicitly tested by visible test cases. For string indexing, validate index bounds before access. For numeric conversions, verify the input is a valid digit. For list operations, check for empty collections. These safeguards improve code robustness and prevent runtime errors on edge cases that may exist in hidden tests, demonstrating production-quality coding practices.
[cgp-00074] helpful=0 harmful=0 :: For operations involving powers of two, prefer bitwise shift operators over arithmetic operations for clarity and efficiency: use left shift (1 << k) instead of 2**k or pow(2, k) for computing 2^k, use right shift (n >> k) instead of n // (2**k) for dividing by powers of two. Bitwise operators make the bit-level intent explicit and are the idiomatic approach in bit manipulation contexts. This is especially valuable when working with bit positions and their corresponding values.
[cgp-00081] helpful=0 harmful=0 :: Before using standard library mathematical constants (math.pi, math.e, etc.), validate that test cases expect full-precision values by calculating one test output and comparing to the expected value. If expected outputs suggest truncated/simplified constants (pi=3.14, pi=3.1415, e=2.718), use hardcoded values matching test precision instead of library constants. Pattern: (1) identify the mathematical constant needed, (2) calculate a test output with the standard constant, (3) if a mismatch exists, derive the constant value that produces the exact expected outputs, (4) use the hardcoded value. Test case expectations override mathematical purity.
## COMMON PYTHON PATTERNS
[cpp-00010] useful=23 dangerous=0 :: For locating components with most/minimal properties primarily based on a criterion, use built-in max()/min() capabilities with the important thing parameter. Instance: max(list_of_lists, key=len) finds the longest checklist. That is extra Pythonic and readable than guide iteration with comparisons.
[cpp-00013] useful=17 dangerous=0 :: For counting or looking out operations in Python collections (tuples, lists, strings), prioritize built-in strategies: use .depend() for prevalence counting, .index() for locating positions, .discover() for strings. These are extra dependable, environment friendly, and Pythonic than guide iteration with counters or loops.
[cpp-00014] useful=3 dangerous=0 :: When working with mixed-type information constructions, use isinstance() for sort checking to tell apart between completely different aspect sorts. Mix with len() checks to validate construction. Instance: isinstance(merchandise, checklist) and len(merchandise) == 2 reliably identifies 2-element lists in blended collections.
[cpp-00015] useful=3 dangerous=0 :: Use prolong() as a substitute of append() when including a number of components from a sequence to an inventory. prolong() provides components individually to the goal checklist, whereas append() would add your entire sequence as a single nested aspect. Instance: end result.prolong([value] * depend) vs end result.append([value] * depend).
[cpp-00016] useful=2 dangerous=0 :: Use checklist multiplication ([value] * depend) to effectively repeat components. That is extra Pythonic and readable than guide loops for creating repeated components. Mix with prolong() for including repeated components to current lists.
[cpp-00019] useful=2 dangerous=0 :: For counting components matching a situation with lambda capabilities, use sum(map(lambda x: 1 if situation else 0, iterable)) as a sublime various to len(checklist(filter(lambda x: situation, iterable))). The sum(map()) strategy maps components to 1/0 and sums them, typically extra readable and environment friendly than filtering then counting.
[cpp-00026] useful=14 dangerous=0 :: For changing sequences (tuples, lists) of characters/strings right into a single string, use str.be a part of() methodology: ''.be a part of(sequence) for character concatenation, or 'separator'.be a part of(sequence) for becoming a member of with delimiters. That is the idiomatic Python strategy - extra readable and performant than guide loops with += or accumulation patterns.
[cpp-00030] useful=1 dangerous=0 :: For character classification with regex, use re.findall() with mutually unique character class patterns. For 'every little thing else' classes (like particular characters), choose negation patterns [^...] over enumerating particular characters - e.g., [^A-Za-z0-9] captures all non-alphanumeric characters comprehensively, avoiding the brittleness of lists like [,.!?]. Guarantee patterns do not overlap to forestall double-counting.
[cpp-00031] useful=2 dangerous=0 :: For locating international most/minimal throughout nested iterables (checklist of tuples, checklist of lists, and so forth.), use nested generator expressions with built-in max()/min(): `max(aspect for container in containers for aspect in container)`. This sample naturally flattens one degree of nesting with out creating intermediate lists, making it splendid for locating extremes throughout tuple data or sublists. Extra environment friendly and readable than guide iteration.
[cpp-00033] useful=2 dangerous=0 :: For index-based entry to dictionary keys, use the sample checklist(dict)[index] or checklist(dict.keys())[index]. This depends on Python 3.7+ ensures that dictionaries preserve insertion order. Changing the dictionary to an inventory extracts keys so as, permitting customary checklist indexing. That is the idiomatic Python answer for mapping numeric indices to dictionary keys.
[cpp-00036] useful=27 dangerous=2 :: For mathematical operations (GCD, LCM, factorial, prime checking, trigonometry), examine Python's math module FIRST earlier than implementing algorithms manually. Constructed-in capabilities like math.gcd(), math.factorial(), math.isqrt() are well-tested, optimized, and cut back implementation errors. Sample: (1) Perceive the mathematical definition, (2) Test if math module gives the operation, (3) Use it instantly or wrap it with problem-specific logic (e.g., is_coprime = math.gcd(a,b) == 1).
[cpp-00038] useful=0 dangerous=0 :: For checking if a quantity is an ideal sq., use math.isqrt() as a substitute of math.sqrt() to keep away from floating-point precision errors. Sample: b = math.isqrt(n); is_perfect_square = (b * b == n). The isqrt() operate returns the integer sq. root, and squaring it again permits precise integer comparability with out floating-point rounding points.
[cpp-00043] useful=0 dangerous=0 :: For character filtering issues (eradicating/maintaining characters primarily based on membership standards), use the set+comprehension+be a part of sample: (1) Convert filter standards right into a set for O(1) lookup (char_set = set(filter_string)), (2) Use checklist comprehension or generator expression to filter (char for char in supply if char not in char_set), (3) Use ''.be a part of() to reconstruct the string. This sample is extra Pythonic, readable, and maintainable than guide index manipulation or character counting approaches, whereas being equally appropriate and environment friendly.
[cpp-00049] useful=0 dangerous=0 :: When returning tuples or lists with blended numeric sorts (integers and floats), use acceptable division operators for every element: integer division (//) for complete quantity outcomes, common division (/) for decimal outcomes. Instance: for sum and common, return (n*(n+1)//2, n*(n+1)/2/n) to make sure sum is int and common is float. This prevents sort mismatches in check assertions.
[cpp-00054] useful=0 dangerous=0 :: For digit-by-digit comparability or manipulation issues (digit distance, digit sum variations, and so forth.): Use the string conversion sample: (1) Convert integers to strings with str(), (2) Use zfill(max_length) to pad shorter numbers with main zeros for equal size, (3) Use zip() to pair corresponding digit positions, (4) Apply operations on paired digits and combination outcomes. Instance: str(num1).zfill(size) and str(num2).zfill(size) then zip() for pairing. This handles different-length numbers elegantly and gives clear positional entry to digits.
[cpp-00056] useful=5 dangerous=0 :: For checking if all/any components in a group fulfill a situation, use Python's built-in all() or any() capabilities with generator expressions. Sample: all(situation for merchandise in iterable) for common quantification (all should fulfill), any(situation for merchandise in iterable) for existential quantification (at the least one satisfies). That is extra Pythonic, readable, and environment friendly than guide loops with flags. Frequent use circumstances: all(v == goal for v in dict.values()) for worth uniformity, any(x > threshold for x in checklist) for threshold checking, all(isinstance(x, int) for x in assortment) for sort validation.
[cpp-00060] useful=0 dangerous=0 :: For whitespace normalization (collapsing a number of areas/whitespace into single areas), use the split-join sample: ' '.be a part of(s.break up()). The important thing perception: str.break up() with out arguments has particular conduct - it splits on ANY whitespace (areas, tabs, newlines) AND mechanically removes empty strings from the end result, naturally collapsing consecutive whitespace. Mixed with ' '.be a part of(), this creates a clear answer with out regex imports. This sample is extra Pythonic and maintainable than regex alternate options like re.sub(r' +', ' ', s) for easy whitespace normalization duties.
[cpp-00062] helpful=0 harmful=0 :: For complex number operations (polar/rectangular conversion, phase calculation, magnitude), use Python's cmath module functions as the first choice: cmath.polar(z) for conversion to polar form (returns magnitude and angle), cmath.rect(r, phi) for polar to rectangular, cmath.phase(z) for angle extraction. These built-in functions handle edge cases correctly (e.g., treating real numbers as complex with imaginary part 0) and are more reliable than manual trigonometric calculations. Pattern: import cmath → use the appropriate function → handle the return type (often tuples).
[cpp-00064] helpful=0 harmful=0 :: For grouping elements by a key while preserving insertion order (important for tie-breaking in subsequent sorting), use collections.OrderedDict with the setdefault pattern: from collections import OrderedDict; grouped = OrderedDict(); for item in items: grouped.setdefault(key, []).append(value). While Python 3.7+ dicts maintain insertion order, OrderedDict makes the intent explicit and is safer when order matters for downstream operations like sorting by aggregated properties, where equal values should keep their original encounter order.
[cpp-00065] helpful=0 harmful=0 :: For creating tuples with variable-length unpacked elements, use the * unpacking operator: (first, *middle_elements, last) unpacks a list/tuple into individual tuple positions. Example: (key, *values, count) where values is a list creates a tuple with key, all values unpacked as separate elements, and count at the end. This is essential when the output format requires flattening nested structures into single-level tuples with variable element counts.
[cpp-00069] helpful=0 harmful=0 :: For regex pattern matching problems requiring full string matches, choose between re.search(), re.match(), and re.fullmatch() based on matching scope: re.match() matches from the start, re.search() finds patterns anywhere, re.fullmatch() requires the entire string to match. When full string matching is needed, either use re.fullmatch() with the pattern directly, or use re.search()/re.match() with explicit anchors (^ for start, $ for end). Example: re.fullmatch('a.*b', s) is equivalent to re.search('^a.*b$', s). Both approaches are valid: fullmatch() makes the intent explicit, while search() with anchors offers more flexibility. Always analyze test cases to determine whether partial or full string matching is required.
[cpp-00072] helpful=1 harmful=0 :: For counting elements in an iterable that match a condition, use the generator expression pattern with sum(): sum(1 for x in iterable if condition). This offers the best balance of readability, memory efficiency, and Pythonic style compared to alternatives like len([x for x in iterable if condition]), which creates an intermediate list. For character-level string operations, prefer built-in string methods (isdigit(), isalpha(), isalnum(), isupper(), islower()) over manual ASCII range comparisons: they handle edge cases correctly, improve readability, and are more maintainable.
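A minimal sketch of the counting pattern above, combining sum() over a generator with the built-in character-class methods (the sample string is hypothetical):

```python
data = "a1b2c3!"  # hypothetical sample input

# sum() over a generator expression counts matches without materializing a
# list, and the str methods handle character classification correctly.
digit_count = sum(1 for ch in data if ch.isdigit())
letter_count = sum(1 for ch in data if ch.isalpha())

print(digit_count, letter_count)  # → 3 3
```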
[cpp-00073] helpful=0 harmful=0 :: For bit manipulation problems (finding set bits, MSB/LSB positions, bit counting), check Python's integer bit methods FIRST before implementing manual algorithms: bit_length() returns the number of bits needed to represent the integer (useful for the MSB position), bit_count() counts set bits (Python 3.10+), as_integer_ratio() for rational representation. These built-in methods are optimized, handle edge cases (including 0), and often eliminate the need for manual bit-by-bit iteration. Pattern: understand what bit property you need, then check whether a built-in method provides it directly.
[cpp-00076] helpful=0 harmful=0 :: For grouping consecutive identical elements in a sequence, use itertools.groupby() as the canonical Python solution. Pattern: [list(group) for key, group in itertools.groupby(sequence)]. The groupby function returns (key, group_iterator) tuples where key is the element value and group is an iterator over consecutive occurrences. Convert each group iterator to a list to materialize results. Critical distinction: groupby groups CONSECUTIVE identical elements only; non-consecutive duplicates form separate groups, making it ideal for run-length encoding and consecutive duplicate detection without manual index tracking.
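The consecutive-only behavior of groupby() called out above is easy to see in a short sketch (the sequence is hypothetical):

```python
from itertools import groupby

seq = [1, 1, 2, 3, 3, 3, 1]  # note the non-consecutive 1s at both ends

# groupby() only merges CONSECUTIVE identical elements, so the trailing 1
# forms its own group rather than joining the first run of 1s.
runs = [list(group) for _, group in groupby(seq)]
print(runs)  # → [[1, 1], [2], [3, 3, 3], [1]]

# Run-length encoding falls out of the same pattern:
rle = [(key, len(list(group))) for key, group in groupby(seq)]
print(rle)  # → [(1, 2), (2, 1), (3, 3), (1, 1)]
```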
## HANDLING EDGE CASES
[hec-00021] helpful=2 harmful=0 :: When using mathematical operations like modulo (%), division, or exponentiation, verify that the solution handles negative numbers correctly. For example, the modulo operator works correctly for both positive and negative integers in Python (e.g., -18 % 2 == 0 for even number checking), but the behavior may differ from expectations in other languages.
## ALGORITHM DESIGN
[ad-00001] helpful=1 harmful=2 :: For recursive GCD problems, use the Euclidean algorithm: the base case is b == 0 (return a), the recursive case is gcd(b, a % b). This handles all edge cases naturally, including argument ordering, equal numbers, and divisibility.
[ad-00006] helpful=0 harmful=0 :: For bidirectional character swap problems (A↔B) using regex: use re.sub() with a callback function in a single pass. Pattern: (1) Create a character class matching all swap targets (e.g., r'[ _]'), (2) Implement a callback that examines each match and returns its counterpart. This avoids ambiguity from sequential replacements, where newly inserted characters become indistinguishable from the originals.
[ad-00008] helpful=0 harmful=0 :: For modular arithmetic problems (nCr mod p, etc.), check whether p must be prime. If p may be composite, avoid algorithms requiring a modular inverse (like Fermat's Little Theorem). Instead, use approaches that avoid division entirely, such as Pascal's triangle with DP: C[j] = (C[j] + C[j-1]) % p, which works for ANY modulus.
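A minimal sketch of the division-free Pascal's-triangle recurrence from the bullet above (function name and test values are my own, for illustration):

```python
def ncr_mod(n: int, r: int, p: int) -> int:
    # 1D Pascal's triangle: C[j] = (C[j] + C[j-1]) % p.
    # No division or modular inverse, so any modulus works, prime or composite.
    c = [0] * (r + 1)
    c[0] = 1
    for i in range(1, n + 1):
        # Iterate right-to-left so each row update reuses the previous row in place.
        for j in range(min(i, r), 0, -1):
            c[j] = (c[j] + c[j - 1]) % p
    return c[r]

print(ncr_mod(10, 3, 1000))  # C(10, 3) = 120
print(ncr_mod(5, 2, 6))      # C(5, 2) = 10, and 10 % 6 = 4 (composite modulus)
```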
[ad-00009] helpful=0 harmful=0 :: When division is required in modular arithmetic: (1) If the modulus is guaranteed prime, use Fermat's Little Theorem: a/b mod p = a * b^(p-2) mod p. (2) If the modulus may be composite, use the Extended Euclidean Algorithm for the modular inverse, or better yet, redesign to avoid division (e.g., use recurrence relations like Pascal's triangle).
[ad-00017] helpful=1 harmful=0 :: For decoding problems with mixed encoded/non-encoded elements: (1) use type checking to distinguish element types, (2) validate the structure of encoded elements, (3) handle each type appropriately in a single pass. Prioritize simple iterative approaches with explicit conditionals over complex comprehensions for better readability and maintainability.
[ad-00018] helpful=4 harmful=0 :: For maximum sum problems with non-adjacent element constraints: Use dynamic programming with the recurrence dp[i] = max(arr[i] + dp[i-2], dp[i-1]), representing the choice to include the current element (add to the best from i-2) or exclude it (keep the best from i-1). Handle edge cases: an empty array returns 0, a single element returns that element, initialize dp[0] = arr[0] and dp[1] = max(arr[0], arr[1]). Time: O(n), Space: O(n) or O(1) with optimization.
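The recurrence above can be sketched in its O(1)-space form (rolling just the last two dp values; the sample array is hypothetical):

```python
def max_non_adjacent_sum(arr):
    # dp[i] = max(arr[i] + dp[i-2], dp[i-1]); keep only the last two values.
    prev2, prev1 = 0, 0  # dp[i-2] and dp[i-1]
    for x in arr:
        # Either take x on top of the best up to i-2, or keep the best up to i-1.
        prev2, prev1 = prev1, max(prev1, prev2 + x)
    return prev1

print(max_non_adjacent_sum([3, 2, 7, 10]))  # → 13 (picks 3 and 10)
print(max_non_adjacent_sum([]))             # → 0
```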
[ad-00023] helpful=0 harmful=0 :: For bit counting and parity checking problems: Multiple valid approaches exist with different trade-offs. (1) Pythonic approach: bin(n).count('1'), the most readable and maintainable, (2) Bit manipulation: repeatedly apply x & (x-1) to clear the lowest set bit, better performance for large inputs, (3) XOR reduction for parity. Choose the Pythonic approach by default unless performance profiling shows it is a bottleneck.
[ad-00028] helpful=1 harmful=1 :: For bit toggling problems: (1) Create a mask with 1s at the positions to be toggled, (2) Use the XOR operation (n ^ mask) to toggle those bits. For variable-length numbers, use bit_length() to determine how many bits to process. Example: to toggle bits at positions 1,3,5 up to bit_length, generate mask = sum(1 << i for i in range(1, n.bit_length(), 2)).
[ad-00037] helpful=0 harmful=0 :: For element rearrangement/partitioning problems (move zeros to the end, separate by condition, etc.): Use the filter+concatenate pattern: (1) filter elements into separate groups using list comprehensions [x for x in lst if condition], (2) count or collect each group separately, (3) concatenate the groups in the required order. This Pythonic approach using built-ins (list comprehension, count(), list multiplication) is often clearer and equally correct compared to in-place two-pointer algorithms, especially for small to medium datasets.
[ad-00039] helpful=0 harmful=0 :: For 'sum of two squares' problems (checking whether n = a² + b²): Use the single-loop O(√n) optimization instead of nested O(n) loops. Iterate one variable from 0 to √n, calculate the remainder (n - a²), and check whether the remainder is a perfect square using math.isqrt(). Return True immediately upon finding a valid pair. This pattern: (1) reduces time complexity, (2) handles edge cases naturally (a=0, a=√n), (3) avoids floating-point errors with isqrt().
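A minimal sketch of that single-loop check, using math.isqrt() exactly as the bullet suggests (function name is my own):

```python
import math

def is_sum_of_two_squares(n: int) -> bool:
    # Single O(√n) loop: for each a, test whether n - a² is a perfect square.
    for a in range(math.isqrt(n) + 1):
        rem = n - a * a
        r = math.isqrt(rem)  # integer square root avoids floating-point errors
        if r * r == rem:
            return True
    return False

print(is_sum_of_two_squares(25))  # → True (3² + 4², or 0² + 5²)
print(is_sum_of_two_squares(3))   # → False
```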
[ad-00041] helpful=4 harmful=1 :: For geometry and formula-based mathematical problems: Follow a structured approach: (1) Identify the correct mathematical formula from problem domain knowledge, (2) Implement the formula as a direct translation into code using math module functions, (3) Avoid reimplementing mathematical functions or constants that exist in standard libraries, (4) Verify the formula with at least one test case before coding. Direct formula translation leads to cleaner, more maintainable code with better numerical precision.
[ad-00042] helpful=0 harmful=0 :: For problems selecting elements from both ends of a collection (k smallest AND k largest), use approaches that handle overlap: (1) Index-based selection: iterate the sorted collection and include elements where idx < k OR idx >= len-k, ensuring each element is selected once, or (2) Set union: combine subsets with set(min_k + max_k) then sort to eliminate duplicates. Always consider edge cases where k*2 >= collection_size, since this guarantees overlap between the minimum and maximum selections. Avoid simple list concatenation, which creates duplicates when the ranges overlap.
[ad-00045] helpful=0 harmful=0 :: For 'find the n-th number with property X' problems: Use the iterative counting pattern: (1) implement a helper function to check whether a number satisfies the property, (2) iterate through candidate numbers starting from a suitable initial value, (3) maintain a counter of numbers that satisfy the property, (4) return the candidate when the counter reaches n. This pattern works for prime numbers, perfect squares, numbers with specific factorization properties, etc. It is easy to implement correctly and to optimize later if needed.
[ad-00046] helpful=3 harmful=0 :: For counting distinct prime factors: Use the standard factorization pattern: (1) iterate potential divisors from 2 to sqrt(n), (2) for each divisor that divides n, increment the distinct factor count, then divide n by that divisor repeatedly until it no longer divides (this ensures each prime is counted once regardless of its power), (3) after the loop, if n > 1, it is a remaining prime factor (count it), (4) optimize by checking divisor 2 separately, then only odd numbers. This correctly distinguishes between distinct primes and their multiplicities.
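The four steps of that factorization pattern translate directly into a short sketch (function name and test values are my own):

```python
def count_distinct_prime_factors(n: int) -> int:
    count = 0
    # Step (4): handle the only even prime separately.
    if n % 2 == 0:
        count += 1
        while n % 2 == 0:
            n //= 2
    # Steps (1)-(2): odd divisors up to sqrt(n); strip each prime fully
    # so it is counted once regardless of its power.
    d = 3
    while d * d <= n:
        if n % d == 0:
            count += 1
            while n % d == 0:
                n //= d
        d += 2
    # Step (3): whatever remains above 1 is a final prime factor.
    if n > 1:
        count += 1
    return count

print(count_distinct_prime_factors(360))  # → 3 (primes 2, 3, 5)
```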
[ad-00048] helpful=1 harmful=0 :: For mathematical series problems (sum of first n numbers, arithmetic/geometric series, factorial-related), check whether a closed-form formula exists before implementing iterative solutions. Common formulas: sum(1..n) = n*(n+1)/2, sum of arithmetic series = n*(first+last)/2, sum of geometric series = a*(r^n - 1)/(r-1). Formula-based solutions provide O(1) time complexity vs O(n) for loops, are less error-prone, and demonstrate mathematical insight. Always verify formula correctness with test cases.
[ad-00051] helpful=1 harmful=0 :: For pair-counting problems (count pairs satisfying a condition), look for mathematical properties that eliminate the need for explicit enumeration. Pattern: (1) Identify what makes a pair valid, (2) Find mathematical properties characterizing valid pairs (e.g., for XOR being odd: one number must be even, the other odd), (3) Transform into a counting problem (count elements in each category), (4) Use combinatorics to compute the result (e.g., odd_count × even_count). This reduces O(n²) pair enumeration to O(n) categorization + O(1) calculation.
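The XOR example in the bullet above makes a compact sketch of the whole pattern (the input list is hypothetical):

```python
def count_odd_xor_pairs(nums):
    # a ^ b is odd iff a and b have different parities, so instead of
    # enumerating all O(n²) pairs, categorize in O(n) and multiply.
    odd = sum(1 for x in nums if x % 2)
    even = len(nums) - odd
    return odd * even

nums = [1, 2, 3, 4]  # hypothetical input
print(count_odd_xor_pairs(nums))  # → 4: (1,2), (1,4), (3,2), (3,4)
```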
[ad-00052] helpful=0 harmful=0 :: For problems involving XOR operations, leverage bit-level properties for optimization: (1) The XOR result is odd ⟺ the operands have different parities (one even, one odd), because parity depends on the least significant bit, (2) XOR is commutative and associative, allowing reordering, (3) x ^ x = 0 and x ^ 0 = x, useful for cancellation patterns. Analyze the specific XOR property relevant to your problem to find mathematical shortcuts that avoid brute-force computation.
[ad-00061] helpful=0 harmful=0 :: For iterative mathematical series problems (sum/product of the first n terms with specific properties): Use a structured 3-step approach: (1) Identify the formula generating the k-th element (e.g., 2k-1 for odd numbers, 2k for even numbers, k² for squares), (2) Determine the operation to apply to each element (exponentiation, multiplication, transformation), (3) Aggregate with the appropriate function (sum, product, max). Implement using generator expressions with built-ins: sum(operation(formula(i)) for i in range(start, n+1)). Ensure the range bounds match the sequence indexing (1-indexed sequences need range(1, n+1)). This pattern provides clarity and correctness for problems where closed-form formulas do not exist or are not obvious.
[ad-00066] helpful=0 harmful=0 :: For problems requiring grouping, counting, and sorting by aggregated properties: (1) Group elements using dict/OrderedDict with setdefault() or defaultdict, choosing OrderedDict when insertion order affects tie-breaking in sorting, (2) Sort groups using sorted() with a key function based on the aggregated metric (e.g., key=lambda x: len(x[1]) for count), (3) Transform the output to match the required format using appropriate unpacking/restructuring. This pattern handles 'group by X, sort by count of Y' problems systematically.
[ad-00068] helpful=0 harmful=0 :: For heap-based 'top k' problems, verify OUTPUT ORDERING against test cases, not just which elements to return. Key distinction: (1) heappop() from a min-heap produces ASCENDING order by the heap key, (2) heapq.nlargest(k, items, key=func) produces DESCENDING order by key, (3) heapq.nsmallest(k, items, key=func) produces ASCENDING order by key. When implementing heap solutions, trace through test cases to determine whether results should be ordered ascending or descending by frequency/priority. If the ordering is wrong, either reverse the final list, swap between nlargest/nsmallest, or use the heappop pattern. Test case output ordering is authoritative when the problem description does not explicitly specify it.
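The nlargest/nsmallest ordering distinction above can be checked with a tiny sketch (the word list is hypothetical):

```python
import heapq
from collections import Counter

words = ['a', 'b', 'a', 'c', 'b', 'a']  # hypothetical input
freq = Counter(words)                    # {'a': 3, 'b': 2, 'c': 1}

# nlargest returns results in DESCENDING order by the key...
top2 = heapq.nlargest(2, freq.items(), key=lambda kv: kv[1])
print(top2)     # → [('a', 3), ('b', 2)]

# ...while nsmallest returns ASCENDING order by the key.
bottom2 = heapq.nsmallest(2, freq.items(), key=lambda kv: kv[1])
print(bottom2)  # → [('c', 1), ('b', 2)]
```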
[ad-00070] helpful=0 harmful=0 :: For 2D grid problems with adjacency or selection constraints (can't pick adjacent cells/rows/columns): Look for opportunities to reduce dimensionality before applying DP. If the constraints allow picking at most one element per column (or row), pre-compute the optimal choice for each column/row (e.g., max of two rows in a column), transforming the problem into a 1D array. Then apply standard 1D DP patterns (like 'house robber' for non-adjacency). This dimensional reduction simplifies the state space and makes complex grid problems tractable using well-known DP templates.
[ad-00071] helpful=0 harmful=0 :: Recognize the 'house robber' DP pattern as a fundamental template applicable beyond linear arrays: any problem involving selecting non-adjacent elements to maximize/minimize a sum can use the recurrence dp[i] = max(value[i] + dp[i-2], dp[i-1]). This pattern appears in: linear arrays with spacing constraints, grid problems (after dimensional reduction), tree problems (with parent-child constraints), and sequence optimization. When you see 'maximize sum' + 'can't pick adjacent', immediately consider this template.
[ad-00075] helpful=0 harmful=0 :: For finding the most significant bit (MSB) value or position: Use the bit_length() method, which returns the number of bits required to represent an integer. For the MSB value, use the pattern 1 << (n.bit_length() - 1), which leverages the fact that the MSB at position k (0-indexed from the right) has value 2^k. The bit_length() approach is cleaner than manual division loops or string conversion methods. Handle the edge case: bit_length() returns 0 for n=0, so verify the problem constraints or add explicit zero handling if needed.
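A minimal sketch of the bit_length() pattern above, including the n=0 edge case it warns about (function name is my own):

```python
def msb_value(n: int) -> int:
    # bit_length() gives the bit count k, so the MSB value is 2^(k-1).
    if n == 0:
        return 0  # explicit zero handling: bit_length() is 0 for n == 0
    return 1 << (n.bit_length() - 1)

print(msb_value(18))  # → 16, since 18 = 0b10010
print(msb_value(1))   # → 1
print(msb_value(0))   # → 0
```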
## TEST CASE INTERPRETATION
[tci-00004] helpful=0 harmful=0 :: Multiple correct implementations can exist for the same problem. Focus on algorithmic correctness verified by passing tests, not on matching a specific reference implementation's style or structure.
[tci-00011] helpful=123 harmful=2 :: Extract the expected OUTPUT FORMAT from test cases, not just the logic. Check whether the return should be a single value, tuple, list, or other structure, and ensure your solution matches this exact format.
[tci-00022] helpful=0 harmful=1 :: When analyzing test cases, check whether ALL inputs map to the SAME output value or structure. If so, the solution may be trivial: simply return that constant output directly. Don't overcomplicate with unnecessary transformations (like list conversions) when a direct return statement satisfies all requirements. Example: if all test cases expect an empty tuple output, return () regardless of input complexity.
[tci-00025] helpful=5 harmful=0 :: Before choosing an implementation approach, deeply understand the CORE REQUIREMENT from the problem description and test cases. For example, 'even parity' means 'even count of 1-bits', not a specific algorithm. Don't lock into a particular technique (like bit manipulation) if simpler alternatives (like string counting) satisfy the requirement equally well.
[tci-00027] helpful=17 harmful=0 :: When problem descriptions use ambiguous terminology (especially in bit manipulation: 'even bits', 'odd positions', etc.), work backward from test cases to discover the actual pattern. Manually trace through examples in their relevant representation (binary for bit problems) to determine the ground-truth interpretation. Test cases are authoritative when terminology is unclear.
[tci-00032] helpful=0 harmful=0 :: When problems ask for the 'maximum/minimum of all records/groups', clarify whether that means: (1) a global extreme across all elements, or (2) per-group extremes returned as a collection. Test cases reveal the distinction: a single-value output indicates a global extreme, a list/tuple output suggests per-group analysis. This interpretation affects whether you flatten the structure or preserve the grouping.
[tci-00034] helpful=0 harmful=0 :: For dictionary-related problems, carefully distinguish from test cases whether the expected output is: (1) a key (string/int), (2) a value, (3) a key-value pair (tuple), or (4) a collection of any of these. The output type determines whether you need dict.keys(), dict.values(), dict.items(), or direct indexing into converted structures. Test case outputs reveal the exact format required.
[tci-00035] helpful=3 harmful=0 :: When function names or problem descriptions suggest specific behavior (e.g., 'parallelogram_perimeter' implying the geometric formula 2*(a+b)), but test cases produce outputs inconsistent with that expectation, trust the test cases as the authoritative specification. Reverse-engineer the actual formula by calculating what operation on the inputs produces the given outputs, then verify this derived pattern against ALL test cases before implementing. Test case expectations override semantic meanings and domain knowledge.
[tci-00040] helpful=0 harmful=0 :: Test results are the primary signal of correctness, not line-by-line comparison with reference implementations. If your solution passes all tests with better time complexity (e.g., O(√n) vs O(n)), it is not just correct but algorithmically superior. Different approaches can be equally or more valid: focus on correctness verification through tests, not on matching particular implementation styles.
[tci-00044] helpful=2 harmful=0 :: When encountering undefined or domain-specific mathematical terms (like 'perfect number', 'lucky number', etc.), treat test cases as the authoritative specification. Systematically analyze test case outputs to reverse-engineer the mathematical definition: (1) examine the numerical properties of the output values (factorization, divisors, digits, etc.), (2) look for patterns or common characteristics across all outputs, (3) formulate a hypothesis about the defining property, (4) verify the hypothesis against ALL test cases. The test cases encode the complete definition when the problem statement is ambiguous.
[tci-00055] helpful=2 harmful=0 :: When problem terminology is completely ambiguous or undefined (like 'digit distance', which could have multiple interpretations), systematically trace through EACH test case manually to identify the actual pattern: (1) Work through inputs and outputs step-by-step in the relevant representation, (2) Formulate a hypothesis about what operation produces these outputs, (3) Verify the hypothesis against ALL remaining test cases, (4) Implement the pattern that satisfies all tests. The test-derived pattern is the correct specification, regardless of what the terminology might suggest in other contexts.
[tci-00058] helpful=0 harmful=0 :: Multiple algorithmically different solutions can be equally valid if they satisfy all test cases. When deriving requirements from ambiguous specifications, use systematic hypothesis testing: (1) analyze each test case to understand the input-output relationships, (2) formulate a hypothesis about the underlying rule, (3) validate the hypothesis against ALL test cases, (4) implement the pattern that passes all tests. Your solution is correct by definition if it satisfies all test requirements, even if it differs structurally from reference implementations or uses a different interpretation of ambiguous terms.
[tci-00063] helpful=0 harmful=0 :: In Python, parentheses alone do not create tuples: distinguish between ('value'), which is just the string 'value' (parentheses are for grouping/precedence), and ('value',), which is a 1-element tuple (trailing comma required). When analyzing test assertions like assert func()==('Matched!'), recognize that this expects a plain string, not a tuple. Only ('Matched!',) with a trailing comma or (a, b) with multiple elements create tuples. This syntax nuance is crucial for matching expected return types exactly.
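The parentheses-vs-tuple distinction above is worth seeing live (a two-line sketch):

```python
# Parentheses alone just group an expression; the trailing comma makes a tuple.
a = ('Matched!')   # a plain string
b = ('Matched!',)  # a 1-element tuple

print(type(a).__name__)  # → str
print(type(b).__name__)  # → tuple
print(a == 'Matched!')   # → True: assert func()==('Matched!') expects a string
```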
[tci-00067] helpful=0 harmful=0 :: When test cases show complex output structures (tuples with variable-length unpacked elements, nested aggregations), analyze the EXACT structure before coding: (1) Count the elements in output tuples/lists, (2) Identify which elements are aggregated vs individual, (3) Determine whether nested structures are flattened (unpacked) or preserved, (4) Check whether ordering within groups matters. Use this structural analysis to choose appropriate Python constructs (* unpacking, list flattening, tuple construction patterns) that match the expected format precisely.
[tci-00077] helpful=0 harmful=0 :: For counting/aggregation problems involving nested structures (lists of lists, trees, nested dictionaries), when the problem asks to 'count elements' without specifying the level, use test cases to determine the counting scope: (1) Check whether test outputs suggest counting only immediate/top-level children (e.g., len(outer_list)) vs recursive counting of all nested elements, (2) Trace through at least one test case with nested structures to see which interpretation produces the expected output, (3) The simplest interpretation (top-level counting) is usually correct unless test cases prove otherwise. Example: 'count lists in [[1,2], [3], [[4,5]]]' could mean 3 (top-level) or 4 (recursive); the test outputs reveal which is expected.
[tci-00078] helpful=0 harmful=0 :: For mathematical problems with infinitely many valid solutions (linear Diophantine equations, modular arithmetic, geometric constructions, etc.), recognize that tests expect ONE PARTICULAR solution, not just any mathematically correct answer. Work through test cases to identify the selection criteria (e.g., smallest non-negative values, specific ordering, canonical form). When choosing algorithms, prefer approaches that naturally produce the expected solution pattern (e.g., iterative search from x=0 upward for the smallest non-negative x) over sophisticated algorithms (e.g., the Extended Euclidean Algorithm) that require extra adjustment logic to match test expectations. The mathematically elegant solution isn't always the right one for passing tests.
[tci-00079] helpful=0 harmful=0 :: For problems involving mathematical constants (pi, e, sqrt(2), etc.), verify that the test cases' expected outputs match calculations using standard library constants (math.pi, math.e). Calculate at least one test case output manually using the standard constant and compare it to the expected value. If there is a precision mismatch (e.g., your 942.477 vs the expected 942.45), the test cases likely expect a simplified/truncated constant value (like pi=3.14 or pi=3.1415) rather than full precision. Check reference implementations for hardcoded constant values and use those exact values to match test expectations, even when they are less mathematically accurate.
## DEBUGGING STRATEGIES
[ds-00005] helpful=110 harmful=2 :: Before generating code, mentally trace through the logic against test cases to verify correctness. This helps catch logical errors early and builds confidence in the solution approach.
[ds-00029] helpful=0 harmful=0 :: For bit manipulation problems with unclear position indexing, test multiple interpretations systematically: (1) 0-indexed vs 1-indexed, (2) counting from the right vs the left, (3) 'even/odd' referring to the position vs the bit value. Work through all test cases manually in binary to validate each hypothesis before implementing. The interpretation that satisfies all test cases is the correct one.
[ds-00080] helpful=0 harmful=0 :: During the reasoning phase, manually calculate the expected outputs for at least one test case using your proposed approach and compare them against the actual expected output. For numerical problems, verify that precision matches exactly: discrepancies like 942.477 vs 942.45 indicate constant precision mismatches (e.g., using math.pi instead of a truncated value). This early validation catches precision issues, wrong formulas, and constant-value problems before code generation.
These results show that ACE can significantly improve performance on complex tasks like code generation.
Summary
In this article, we've covered a lot of ground on context engineering and the ACE approach, so let's briefly recap the key takeaways:
- Context engineering has emerged as a critical discipline because it lets us improve LLM performance without lengthy and costly fine-tuning.
- ACE (Agentic Context Engineering) is one of the latest approaches to prompt optimisation, leveraging detailed playbooks of atomised bullet points that combine instructions with metadata.
- As our examples showed, prompt optimisation isn't a silver bullet. It doesn't improve performance in every case. According to the authors, ACE is most effective for agentic workflows or highly specialised domains. In our experiments, it made a clear difference in code generation but had limited impact on banking intent classification.
The main takeaway for me is that prompt optimisation won't solve your task automatically. You still need a holistic understanding of what information the LLM and agents have during the optimisation process and how best to structure and refine it. Context matters, and thoughtful engineering of that context is what makes approaches like ACE effective.
Thanks for reading. I hope this article was insightful. Remember Einstein's advice: "The important thing is not to stop questioning. Curiosity has its own reason for existing." May your curiosity lead you to your next great insight.
Reference
This article is based on the paper and research by Zhang et al., published in 2025, "Agentic Context Engineering: Evolving Contexts for Self-Improving Language Models".