In the Author Spotlight series, TDS Editors chat with members of our community about their career path in data science and AI, their writing, and their sources of inspiration. Today, we're thrilled to share our conversation with Mariya Mansurova.
Mariya's story is one of perpetual learning. Starting with a strong foundation in software engineering, mathematics, and physics, she's spent more than 12 years building expertise in product analytics across industries, from search engines and analytics platforms to fintech. Her distinctive path, including hands-on experience as a product manager, has given her a 360-degree view of how analytical teams can help businesses make the right decisions.
Now serving as a Product Analytics Manager, she draws energy from discovering fresh insights and innovative approaches. Each of her articles on Towards Data Science reflects her latest "aha!" moment: a testament to her belief that curiosity drives real growth.
You've written extensively about agentic AI and frameworks like smolagents and LangGraph. What excites you most about this emerging field?
I first started exploring generative AI largely out of curiosity and, admittedly, a bit of FOMO. Everyone around me seemed to be using LLMs or at least talking about them. So I carved out time to get hands-on, starting with the very basics like prompting techniques and LLM APIs. And the deeper I went, the more excited I became.
What fascinates me the most is how agentic systems are shaping the way we live and work. I believe this influence will only continue to grow over time. That's why I take every chance to use agentic tools like Copilot or Claude Desktop, or to build my own agents using technologies like smolagents, LangGraph or CrewAI.
The most impactful use case of agentic AI for me has been coding. It's genuinely impressive how tools like GitHub Copilot can improve the speed and the quality of your work. While recent research from METR has questioned whether the efficiency gains are really that substantial, I definitely notice a difference in my day-to-day work. It's especially helpful with repetitive tasks (like pivoting tables in SQL) or when working with unfamiliar technologies (like building a web app in TypeScript). Overall, I'd estimate about a 20% increase in speed. But this boost isn't just about productivity; it's a paradigm shift that also expands what feels possible. I believe that as agentic tools continue to evolve, we'll see a growing efficiency gap between the people and companies that have learned to leverage these technologies and those that haven't.
When it comes to analytics, I'm especially excited about automated reporting agents. Imagine an AI that can pull the right data, create visualisations, perform root cause analysis where needed, note open questions and even create the first draft of the presentation. That would be simply magical. I've built a prototype that generates such KPI narratives. And although there's a significant gap between the prototype and a production solution that works reliably, I believe we'll get there.
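As a rough illustration (not the actual prototype), the narrative-drafting step of such an agent could look something like the sketch below, assuming the OpenAI Python client; the KPI numbers, model name and prompts are placeholders:

```python
# A rough sketch of the narrative-drafting step of a reporting agent.
# The KPI snapshot, model name and prompts are illustrative placeholders.
import pandas as pd
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# In a real agent, this snapshot would be pulled from the data warehouse
kpis = pd.DataFrame({
    "metric": ["active_users", "conversion_rate", "revenue"],
    "last_week": [120_300, 0.034, 510_000],
    "this_week": [112_900, 0.036, 498_000],
})
kpis["wow_change"] = (kpis["this_week"] - kpis["last_week"]) / kpis["last_week"]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a product analyst. Summarise the "
            "KPI movements, flag anomalies and list open questions for the team."},
        {"role": "user", "content": kpis.to_string(index=False)},
    ],
)
print(response.choices[0].message.content)  # first draft of the KPI narrative
```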
You've written three articles in the "Practical Computer Simulations for Product Analysts" series. What inspired that series, and how do you think simulation can reshape product analytics?
Simulation is a hugely underutilised tool in product analytics. I wrote this series to show people how powerful and accessible simulations can be. In my day-to-day work, I keep encountering what-if questions like "How many operational agents will we need if we add this KYC control?" or "What's the likely impact of launching this feature in a new market?". You can simulate any system, no matter how complex. Simulations gave me a way to answer these questions quantitatively and fairly accurately, even when hard data wasn't yet available, so I'm hoping more analysts will start using this approach.
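For instance, a first pass at the staffing question above might look like this Monte Carlo sketch, where all volumes, handling times and productivity figures are made-up assumptions:

```python
# A minimal Monte Carlo sketch of "how many agents do we need for this control?"
# All volumes, handling times and productivity figures are made-up assumptions.
import numpy as np

rng = np.random.default_rng(42)
n_days = 10_000

# Assumed daily volume of cases triggering the new KYC control
cases = rng.poisson(lam=450, size=n_days)
# Assumed average handling time per case on a given day, in minutes
handling_min = rng.lognormal(mean=2.0, sigma=0.3, size=n_days)

workload_hours = cases * handling_min / 60
agents_needed = workload_hours / 7  # ~7 productive hours per agent per day

# Staff for a busy day, not the average day
print(f"median day: {np.median(agents_needed):.1f} agents, "
      f"95th percentile day: {np.percentile(agents_needed, 95):.1f} agents")
```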
Simulations also shine when working with uncertainty and distributions. Personally, I prefer bootstrap methods to memorising a long list of statistical formulas and significance criteria. Simulating the process often feels more intuitive, and it's less error-prone in practice.
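As a simple example, comparing two experiment variants with a bootstrap takes just a few lines; the data below is synthetic, purely to keep the sketch self-contained:

```python
# A minimal bootstrap sketch: a confidence interval for the uplift in mean
# session length between two variants, with no closed-form formulas needed.
# The data is synthetic, just to keep the example self-contained.
import numpy as np

rng = np.random.default_rng(0)
control = rng.exponential(scale=10.0, size=1_000)    # session lengths, minutes
treatment = rng.exponential(scale=10.8, size=1_000)

n_boot = 10_000
diffs = np.empty(n_boot)
for i in range(n_boot):
    # Resample each group with replacement and record the difference in means
    c = rng.choice(control, size=control.size, replace=True)
    t = rng.choice(treatment, size=treatment.size, replace=True)
    diffs[i] = t.mean() - c.mean()

low, high = np.percentile(diffs, [2.5, 97.5])
print(f"uplift: {treatment.mean() - control.mean():.2f} min, "
      f"95% CI: [{low:.2f}, {high:.2f}]")
```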
Finally, I find it fascinating how technology has changed the way we do things. With today's computing power, where any laptop can run thousands of simulations in minutes or even seconds, we can easily solve problems that would have been challenging just thirty years ago. That's a game-changer for analysts.
Several of your posts focus on transitioning LLM applications from prototype to production. What common pitfalls do you see teams fall into during that phase?
Through practice, I've discovered there's a significant gap between LLM prototypes and production solutions that many teams underestimate. The most common pitfall is treating prototypes as if they're already production-ready.
The prototype phase can be deceptively smooth. You can build something functional in an hour or two, test it on a handful of examples, and feel like you've cracked the problem. Prototypes are great tools to prove feasibility and get your team excited about the opportunities. But here's where teams often stumble: these early versions provide no guarantees around consistency, quality, or safety when handling diverse, real-world scenarios.
What I've learned is that successful production deployment starts with rigorous evaluation. Before scaling anything, you need clear definitions of what "good performance" looks like in terms of accuracy, tone of voice, speed and any other criteria specific to your use case. Then you need to track these metrics continuously as you iterate, ensuring you're actually improving rather than just changing things.
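In its simplest form, this can be as basic as a fixed test set, an explicit pass criterion and a score you track on every iteration; the sketch below uses a stubbed classifier and hypothetical test cases just to show the shape of such a check:

```python
# A minimal sketch of a release gate for an LLM feature: a fixed test set,
# an explicit accuracy threshold, and a score tracked on every iteration.
# The test cases and the classify() stub are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class EvalCase:
    text: str
    expected_label: str

TEST_SET = [
    EvalCase("The app keeps crashing on login", "bug"),
    EvalCase("Please add dark mode", "feature_request"),
    EvalCase("Love the new dashboard!", "praise"),
]

def classify(text: str) -> str:
    """Stub for the real LLM call under evaluation."""
    return "bug"

def passes_release_gate(threshold: float = 0.9) -> bool:
    correct = sum(classify(case.text) == case.expected_label for case in TEST_SET)
    accuracy = correct / len(TEST_SET)
    print(f"accuracy: {accuracy:.0%} (threshold: {threshold:.0%})")
    return accuracy >= threshold  # gate the release on the metric

if __name__ == "__main__":
    passes_release_gate()
```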
Think of it like software testing: you wouldn't ship code without proper testing, and LLM applications require the same systematic approach. This becomes especially critical in regulated environments like fintech or healthcare, where you need to demonstrate reliability not just to your internal team but to compliance stakeholders as well.
In these regulated areas, you'll need comprehensive monitoring, human-in-the-loop review processes, and audit trails that can withstand scrutiny. The infrastructure required to support all of this often takes far more development time than building the original MVP. That's something that consistently surprises teams who focus primarily on the core functionality.
Your articles often blend engineering principles with data science/analytics best practices, such as your "Top 10 engineering lessons every data analyst should know." Do you think the line between data and engineering is blurring?
The role of a data analyst or a data scientist today often requires a mix of skills from multiple disciplines.
- We write code, so we share common ground with software engineers.
- We help product teams think through strategy and make decisions, so product management skills are useful.
- We draw on statistics and data science to build rigorous and comprehensive analyses.
- And to make our narratives compelling and truly influence decisions, we need to master the art of communication and visualisation.
Personally, I was lucky to gain a lot of programming experience early on, back at school and university. This background helped me tremendously in analytics: it increased my efficiency, helped me collaborate better with engineers and taught me how to build scalable and reliable solutions.
I strongly encourage analysts to adopt software engineering best practices. Things like version control systems, testing and code review help analytical teams develop more reliable processes and deliver higher-quality results. I don't think the line between data and engineering is disappearing entirely, but I do believe that analysts who embrace an engineering mindset will be far more effective in modern data teams.
You've explored both causal inference and cutting-edge LLM tuning techniques. Do you see these as part of a shared toolkit or separate mindsets?
That's actually a great question. I'm a strong believer that all these tools (from statistical methods to modern ML techniques) belong in a single toolkit. As Robert Heinlein famously said, "Specialisation is for insects."
I think of analysts as data wizards who help their product teams solve problems using whatever tools fit best: whether it's building an LLM-powered classifier for NPS comments, using causal inference to make strategic decisions, or building a web app to automate workflows.
Rather than specialising in particular skills, I prefer to focus on the problem we're solving and keep the toolset as broad as possible. This mindset not only leads to better outcomes but also fosters a culture of continuous learning, which is essential in today's fast-moving data industry.
You've covered a broad range of topics, from text embeddings and visualizations to simulation and multi-agent AI systems. What writing habit or guiding principle helps you keep your work so cohesive and approachable?
I usually write about topics that excite me at the moment, either because I've just learned something new or had an interesting discussion with colleagues. My inspiration often comes from online courses, books or my day-to-day tasks.
When I write, I always think about my audience and how the piece can be genuinely useful both for others and for my future self. I try to explain all the concepts clearly and leave breadcrumbs for anyone who wants to dig deeper. Over time, my blog has become a personal knowledge base. I often return to old posts: sometimes just to copy a code snippet, sometimes to share a resource with a colleague who's working on something similar.
As we all know, everything in data is interconnected. Solving a real-world problem often requires a combination of tools and approaches. For example, if you're estimating the impact of launching in a new market, you might use simulation for scenario analysis, LLMs to explore customer expectations, and visualisation to present the final recommendation.
I try to reflect these connections in my writing. Technologies evolve by building on previous breakthroughs, and understanding the foundations helps you go deeper. That's why many of my posts reference one another, letting readers follow their curiosity and discover how different pieces fit together.
Your articles are impressively structured, often walking readers from foundational concepts to advanced implementations. What's your process for outlining a complex piece before you start writing?
I believe I developed this way of presenting information at school; these habits have deep roots. As the book The Culture Map explains, different cultures vary in how they structure communication. Some are concept-first (starting from fundamentals and iteratively moving toward conclusions), while others are application-first (starting with results and diving deeper as needed). I've definitely internalised the concept-first approach.
In practice, many of my articles are inspired by online courses. While watching a course, I outline the rough structure in parallel so I don't forget any important nuances. I also note down anything that's unclear and mark it for future reading or experimentation.
After the course, I start thinking about how to apply this knowledge to a practical example. I firmly believe you don't really understand something until you try it yourself. Although most courses include practical examples, they're often too polished. Only when you apply the same ideas to your own use case do you run into edge cases and friction points. For example, the course might use OpenAI models, but I want to try a local model, or the framework's default system prompt doesn't work for my particular case and needs tweaking.
Once I have a working example, I move on to writing. I prefer to separate drafting from editing. First, I focus on getting all my ideas and code down without worrying about grammar or tone. Then I shift into editing mode: refining the structure, choosing the right visuals, putting together the introduction, and highlighting the key takeaways.
Finally, I read the whole thing end-to-end from the beginning to catch anything I've missed. Then I ask my partner to review it. They often bring a fresh perspective and point out things I hadn't considered, which helps make the article more comprehensive and accessible.
To learn more about Mariya's work and stay up-to-date with her latest articles, follow her here on TDS and on LinkedIn.