Reliability and predictability
The way in which we work together with computer systems as we speak is predictable. As an example, once we construct software program methods, an engineer sits and writes code, telling the pc precisely what to do, step-by-step. With an agentic AI course of, we don’t present step-by-step directions. Relatively, we lead with the result we wish to obtain, and the agent determines how you can attain this aim. The software program agent has a level of autonomy, which implies there could be some randomness within the outputs.
We noticed the same concern with ChatGPT and different LLM-based generative AI methods once they first debuted. However within the final two years, we’ve seen appreciable enhancements within the consistency of generative AI outputs, because of fine-tuning, human suggestions loops, and constant efforts to coach and refine these fashions. We’ll must put the same stage of effort into minimizing the randomness of agentic AI methods to make them extra predictable and dependable.
Information privateness and safety
Some firms are hesitant to make use of agentic AI resulting from privateness and safety considerations, that are just like these with generative AI however could be much more regarding. For instance, when a consumer engages with a giant language mannequin, each bit of knowledge given to the mannequin turns into embedded in that mannequin. There’s no method to return and ask it to “overlook” that data. Some sorts of safety assault, similar to immediate injection, benefit from this by making an attempt to get the mannequin to leak proprietary data. As a result of software program brokers have entry to many alternative methods with a excessive stage of autonomy, there’s an elevated threat that it may expose non-public knowledge from extra sources.