📖 **Source**: TechCrunch
📂 **Category**: AI, Exclusive, Interview, Microsoft, Microsoft Foundry
✅ **What You’ll Learn**:
For 24 years, Microsoft’s Amanda Silver has been helping developers, and in the last few years that has meant building AI tools. After a long stint on GitHub Copilot, Silver is now a corporate vice president in Microsoft’s CoreAI division, where she works on tools for deploying applications and agent systems inside enterprises.
Her work focuses on the Foundry system within Azure, which is designed as a unified AI gateway for enterprises, giving her a close-up view of how companies actually use these systems and where deployments end up failing.
I spoke with Silver about the current capabilities of enterprise agents, and why she believes this is the biggest opportunity for startups since the public cloud.
This interview has been edited for length and clarity.
So, your work focuses on Microsoft products for outside developers — often startups that aren’t focused on AI. How do you see the impact of artificial intelligence on these companies?
I see this as a watershed moment for startups, as profound as the move to the public cloud. The cloud had a huge impact on startups because they no longer needed real estate to host their racks, and they no longer needed big capital outlays for hardware sitting in their own labs. Everything got cheaper. Agentic AI is going to drive down the overall cost of operating software once again, because many of the functions involved in standing up a new venture, whether that’s customer support or legal work, can be done faster and cheaper with AI agents. I think that will lead to more projects and more startups being launched, and we’ll see higher-value startups run by fewer people. I think it’s an exciting world.
What does that look like in practice?
We’re certainly seeing multi-step agents used very widely for all kinds of programming tasks. For example, one thing developers have to do to maintain a code base is keep up with the latest versions of the libraries they depend on. You might be sitting on an older version of the .NET runtime or the Java SDK. We can have these agent systems reason over the entire code base and do that update far more easily, cutting the time it takes by 70% or 80%. You need a multi-step agent deployed to do that.
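In code, the pattern Silver describes looks roughly like a plan-act-verify loop. The sketch below is a minimal illustration under assumed placeholder helpers (`call_model`, `apply_edit`); it is not Foundry’s implementation or a Microsoft API.

```python
# Minimal sketch of a multi-step dependency-upgrade agent. Every helper here
# is a placeholder, not a Foundry or Azure API; a real system would plug in
# its own model client, file patcher, and build tooling.
import subprocess


def call_model(prompt: str) -> str:
    """Placeholder for a chat-completion call to whatever model the agent uses."""
    raise NotImplementedError


def apply_edit(repo_path: str, edit: str) -> None:
    """Placeholder: patch the files named in the model's proposed edit."""
    raise NotImplementedError


def upgrade_dependency(repo_path: str, package: str, target_version: str) -> bool:
    # Step 1: plan. Ask the model to reason over the whole code base and list
    # the edits needed to move to the new version.
    plan = call_model(
        f"List the changes needed in {repo_path} to move {package} "
        f"to version {target_version}, one edit per line."
    )

    # Step 2: act. Apply each proposed edit.
    for step in plan.splitlines():
        apply_edit(repo_path, step)

    # Step 3: verify and retry. Build and test; feed failures back to the model.
    for _ in range(3):
        result = subprocess.run(
            ["dotnet", "test"], cwd=repo_path, capture_output=True, text=True
        )
        if result.returncode == 0:
            return True  # upgrade verified by the test suite
        fix = call_model(f"Tests failed:\n{result.stdout}\nPropose fixes, one per line.")
        for step in fix.splitlines():
            apply_edit(repo_path, step)

    return False  # could not converge; hand off to a human reviewer
```

The key point is the retry loop at the end: the agent keeps feeding build and test failures back to the model until the suite passes or a human takes over.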
Live-site operations are another one. If you’re maintaining a website or a service and something goes wrong, an alert goes off in the night and someone on call has to wake up and respond to the incident. We still keep people on call 24/7 in case the service goes down, but it’s a widely hated job, because you’re constantly being woken up by these small incidents. We’ve now built an agentic system that successfully diagnoses, and in many cases mitigates, the problems that come up in live-site operations, so humans don’t have to wake up in the middle of the night, get to their stations, and work out what’s going on. It also significantly reduces our mean time to resolve an incident.
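The live-site pattern is similar, with one extra constraint: the agent only acts on its own within a pre-approved set of mitigations. The sketch below is an assumption-laden illustration, not the system Silver’s team built; every helper name is a placeholder.

```python
# Minimal sketch of a live-site triage agent. call_model, fetch_recent_logs,
# apply_mitigation, and page_on_call_engineer are all placeholders for whatever
# model, monitoring, and paging stack a team actually uses.

# Assumed allow-list: only mitigations a human has pre-approved run unattended.
SAFE_MITIGATIONS = {"restart", "rollback", "scale_out"}


def call_model(prompt: str) -> str:
    """Placeholder for the model call that produces a diagnosis and mitigation."""
    raise NotImplementedError


def fetch_recent_logs(service: str) -> str:
    """Placeholder: pull recent logs and metrics for the affected service."""
    raise NotImplementedError


def apply_mitigation(service: str, action: str) -> None:
    """Placeholder: execute a pre-approved mitigation such as a rollback."""
    raise NotImplementedError


def page_on_call_engineer(alert: dict, diagnosis: str) -> None:
    """Placeholder: wake a human, with the agent's diagnosis attached."""
    raise NotImplementedError


def handle_incident(alert: dict) -> None:
    logs = fetch_recent_logs(alert["service"])
    diagnosis = call_model(
        f"Alert: {alert['summary']}\nRecent logs:\n{logs}\n"
        "Name the likely root cause, then on the last line output exactly one "
        "mitigation from: " + ", ".join(sorted(SAFE_MITIGATIONS)) + ", or 'escalate'."
    )
    proposed_action = diagnosis.strip().splitlines()[-1].strip().lower()

    if proposed_action in SAFE_MITIGATIONS:
        # Routine failures get mitigated without waking anyone up.
        apply_mitigation(alert["service"], proposed_action)
    else:
        # Anything the agent cannot safely handle is escalated, and the human
        # starts from the diagnosis rather than from a blank terminal.
        page_on_call_engineer(alert, diagnosis)
```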
One of the puzzles of this moment is that customer deployments aren’t happening as quickly as we expected even six months ago. I’m curious why you think that is.
When you look at the people building these agents, what keeps them from being successful in many cases comes down to not being clear about the agent’s real purpose. There’s a cultural change that needs to happen in how people build these systems. What business use case are they trying to solve? What are they trying to achieve? You need to be very clear about your definition of success for the agent, and you have to think about what data you’re giving the agent so it can reason about how to accomplish that particular task.
We see those things as the bigger stumbling block, more so than general uncertainty holding back customer deployments. Anyone who actually goes and looks at these systems sees the return on investment.
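One way to force those questions to be answered up front is to write them into the agent’s definition before wiring up any tools. The sketch below is purely illustrative; the field names are assumptions, not a Foundry schema.

```python
# Illustrative only: these fields are not a Microsoft Foundry schema, just one
# way to make the use case, success definition, and data scope explicit before
# any tools are wired up.
from dataclasses import dataclass


@dataclass
class AgentSpec:
    business_use_case: str    # what problem is this agent actually solving?
    success_definition: str   # how will we know the agent is doing its job?
    data_sources: list[str]   # what data does the agent get to reason over?
    escalation_path: str      # who gets pulled in when the agent cannot decide?


returns_agent = AgentSpec(
    business_use_case="Decide whether a returned package can be accepted",
    success_definition="Agrees with human reviewers on a sampled set of past returns",
    data_sources=["return photos", "order history", "published return policy"],
    escalation_path="Queue for a human reviewer whenever the model is unsure",
)
```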
You mentioned the general uncertainty, which I think looks like a big drag from the outside. Why do you consider it less of a problem in practice?
First of all, I think it will be very common for agent systems to have human-in-the-loop scenarios. Think of something like returning a package. It used to be that you had a return processing workflow that was 90% automated and 10% human intervention, where someone had to go look at the package and had to make a judgment about how damaged the package was before deciding to accept the return.
This is a perfect example where computer vision models are now so good that, in many cases, we don’t need as much human oversight to inspect the package and make that decision. There will still be some edge cases where the computer vision isn’t good enough to make the call, and those may be escalated. It’s a bit like asking how often you need to call the manager.
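A confidence-gated escalation is one simple way to implement that “call the manager” behavior. The sketch below is hypothetical; the helpers and the threshold are assumptions, not details from the interview.

```python
# Hypothetical human-in-the-loop gate for the package-return example.
# classify_damage and route_to_human are placeholders, and the 0.9 threshold
# is an assumption, not a number from the interview.
CONFIDENCE_THRESHOLD = 0.9


def classify_damage(photo: bytes) -> tuple[str, float]:
    """Placeholder for a computer-vision call; returns (label, confidence)."""
    raise NotImplementedError


def route_to_human(order_id: str, photo: bytes, suggestion: str) -> str:
    """Placeholder: queue the case for a human reviewer and return their decision."""
    raise NotImplementedError


def process_return(photo: bytes, order_id: str) -> str:
    label, confidence = classify_damage(photo)

    if confidence >= CONFIDENCE_THRESHOLD:
        # The common case: the model is confident enough to decide on its own.
        return "accept" if label == "undamaged" else "reject"

    # The edge case, i.e. "calling the manager": escalate with the model's best
    # guess attached so the human starts from context rather than from zero.
    return route_to_human(order_id, photo, suggestion=label)
```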
There are some things that will always need some sort of human oversight, because they are critical processes. Consider incurring a contractual legal obligation, or deploying code into a production code base that could impact the reliability of your systems. But even then, there is a question about how far we can go in automating the rest of the process.
🕒 **Posted on**: February 12, 2026
