As AI becomes the way people use software, teams need to open up to agents while staying in control. This session shows how Agent IAM and Agent Analytics help you set real guardrails when agents read, write, or trigger actions in your product.
The demo highlights blocking risky calls, step-up checks, approvals, and automatic masking of sensitive data. Nir Rothenberg, the CISO of Rapyd, shares how he thinks about enablement and why good guardrails let companies move faster, not slower.
Learn more about AgentLink:
Hello, everybody.
Welcome.
And we’re here on day two of AgentLink launch by Frontegg. Super excited to have you here, and we’re going to have more guests here. We’re going to talk about fun stuff and important stuff. Right? So yesterday, it was more about the fun, about connecting, about making requests from your AI platforms.
And today is going to be on basically how we can pull that off while still keeping it safe and tight and guardrailed because the AI revolution is happening so fast.
And what we hear a lot from companies is the question of, yeah, it’s cool, but how can I trust it? So we’re going to talk a lot about trust and about how AgentLink can help make it happen safely and securely. So let’s start, and I will go quickly even faster than yesterday through a bit of the general info about AgentLink. So as I said yesterday, we’ve existed for the last six years with plenty of customers. And, you know, just as I told you yesterday, the last six years were all about securing the access for leading SaaS applications, and we have controls for securing web interfaces, mobile applications, definitely APIs. There’s a full section within the Frontegg product for securing API endpoints.
And lately, we’ve been focused a lot around enabling real security for this new interface that companies are presenting around enabling AI agents to access their SaaS products.
And we saw one example of that yesterday.
Just as I explained, I’ll go quickly through that. We had this evolution happening throughout the last thirty years on how products are being used, going all the way from terminals to installable software to cloud to mobile APIs that became standard in the last decade. And what is happening now is that we start to see a shift, and your apps are being used by your customers through AI agents. And we already start to see that AI platforms and the leading AI platforms add capabilities to connect SaaS applications.
So if you think about it, just like at home, you’re connecting and performing most of your activities today through ChatGPT, through Claude, through Gemini, whatever it is we’re using, the same will happen in the workforce, and we will see that your IT guy, your admin security folks will include all of the applications already within the chosen AI platform.
And you will enter this AI platform where you will start to basically do your job using the data that is already connected to your AI platform within your workspace. So that’s a whole different way of thinking about SaaS usage, and we start to see that it’s already happening. So as I showed yesterday, I’m gonna show you again soon.
Claude and ChatGPT are already in the game. Google added the capability to add any custom MCP connector within the Gemini CLI. Microsoft has Copilot Studio, where you can get most of the things already enabled there. And it’s happening so fast, and we’ll talk about it in a second with our guest as well.
And today, we’re proud to introduce the safe way to open up your product to AI platforms. So today, we’re introducing agent identity and access management, and we’re introducing AgentLink analytics as well.
So, you know, step-up authentication, human-in-the-loop controls, data masking, everything that you can think of that, you know, your CISO would be worried about as you open your product to AI platforms.
This is what we handle in agent identity and access management by Frontegg. And as we will see soon, you can get it, you know, within your existing MCP.
Doesn’t matter if it was built with some library or you built it just through Cursor, you know, in your development environment, or you’re using the Frontegg AgentLink connector. You can have all of these guardrail and data capabilities by connecting whatever it is you have there on your back end and whatever stack you’re using for identity management, for logging, for security, for your SIEM solution, anything. So we will see how that works.
A bit about the core principles. So we worked on this product for the last few months, and this is why we’re getting this rich toolset and the great experience that we will see soon, some of which I presented yesterday. But one of the core principles that we try to enforce here is, first of all, you know, AI adoption does not happen in the blink of an eye. It happens gradually within your ecosystem, and this is what we try to enable.
So, basically, you can start by exposing your main API or some side API that you have. No problem. AgentLink allows you to do that, so it doesn’t have to happen all at once. The second thing is you get the granular authorization and control.
So first of all, we enforce the same role based access control you already have on your existing APIs. But once you really want to get it granular, you can do that without coding. No code is required here, and I will show that in a few minutes. Third thing is, obviously, we need the visibility and auditing, and we also need the capability to kinda export our logs to our SIEM solutions, Splunk, whatever it is we’re using.
So full visibility over any request that is going through AgentLink, ability to troubleshoot it, and really dive into any data that came through the request and returned as a response.
So as mentioned yesterday, you can use it if your identity provider is Frontegg, but any OpenID compliant identity provider is accepted. So you don’t have to go through this transition, huge migration.
And if you find value in it later, you can do the migration to Frontegg IDP, but you don’t have to do that. So any IDP is supported here, and you can enjoy all that value. It doesn’t matter what IDP you use.
And I think that the fourth thing is the data controls. So we hear from a lot of security people that, you know, they don’t really trust even the big platforms.
They’re not sure how their data will be used, whether it will be used for training, whether PII will leak. We heard about a few leaks that happened just recently, with the Postmark MCP incident and the others that we had along the last few months. So the ecosystem is still being built. It will take time to stabilize, and this is why we chose to really take it to the next level and help you meet all of the compliance requirements that exist out there for, you know, data privacy, and you can get it out of the box. So we’ll see that again in my demo soon.
So high level architecture, as mentioned yesterday, we have the clients, in this example, Bank of America, Lululemon, Walmart, that will use their custom agents or their AI platforms to access your application, which is marked on the right side of this diagram.
So Frontegg gets in the middle of things as a gateway, and all of the requests will pass through Frontegg as an MCP layer.
We analyze the request and the tool execution. And if it’s allowed, we pass it to your back end, whatever it is you’re using there, whether it’s GraphQL, REST APIs, Lambdas, or your own custom-written AI tools.
It’s a sneak peek at what we’ll have tomorrow about building new types of back ends. But Frontegg is basically there to monitor, govern, and make sure that everything is done according to the rules and plans that you set. And then, when the API is executed and the response is going back to the user, we sit there to make sure that nothing that shouldn’t go back to the AI platforms gets through: anything that breaches the policies in your configuration is masked or redacted.
So this is the diagram. Today, we’re going to extend our existing RBAC on the APIs. We’re going to set some policies. We’re going to set some data controls and see how all of that is also visible and audited. So this is the plan.
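To make the gateway flow concrete, here is a minimal sketch of the idea: a tool call comes in, configured policies are evaluated, and sensitive fields are masked on the way back to the AI platform. This is purely illustrative; none of the function or field names below come from the actual Frontegg API.

```python
# Illustrative sketch of the gateway flow described above. A tool call is
# checked against policies before reaching the backend, and the response
# is masked before returning to the AI platform. All names are invented.

def evaluate_policies(tool, args, policies):
    """Return the first matching action ('block', 'step_up', ...), or 'allow'."""
    for p in policies:
        if p["tool"] == tool and p["condition"](args):
            return p["action"]
    return "allow"

def mask_response(response, sensitive_fields):
    """Redact configured fields before the payload goes back to the AI platform."""
    return {k: ("***" if k in sensitive_fields else v) for k, v in response.items()}

# Example policy mirroring the demo: block expenses above 50k.
policies = [
    {"tool": "create_expense",
     "condition": lambda a: a.get("amount", 0) > 50_000,
     "action": "block"},
]

decision = evaluate_policies("create_expense", {"amount": 100_000}, policies)
print(decision)  # -> block

masked = mask_response({"vendor": "Acme", "card_number": "4111-1111"}, {"card_number"})
print(masked["card_number"])  # -> ***
```

The key design point is that the policy check and the masking both live in the gateway, so they apply no matter which AI platform or backend is on either side.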
But before we start, I want to welcome Nir. Hey, Nir.
Hey. What’s up? It’s good to be here. Thanks for having me.
Hello. Nir is from Rapyd.
Nir, tell us a bit about yourself, what you’re, what you’ve been up to, and I’ll ask you some questions after that.
So it all started as a school field trip. I got bitten by a radioactive, security spider. You know the story. And then I woke up. I was interested in information security. It all went downhill from there.
He was the head of security in a spyware company called NSO Group.
Then I joined Rapyd as one of the first fifty employees or so.
Rapyd grew thanks to me. Just kidding.
But, you know, thanks to many people, it grew thanks to the spider.
Everybody got bitten by that spider. It grew to about two thousand people globally. We acquired a bunch of companies.
We’re moving very, very fast. And, to wrap it up, I’m part of the management. I’m the CIO and the CISO. So I manage all of IT, information systems, and security, obviously, which is my background.
So a few other spider bites along the way. Yeah. And, obviously, AI, you know, I’ve heard of it, like, last week, I think. Yeah. Yeah. Last week, I think, the first time somebody mentioned it on the bus or something.
I’m just kidding. Obviously, it’s a big deal, and it’s a game changer. And it’s top of mind for the board and the management meetings. You know, every room you go into, every meeting you attend, there’s AI. There’s a shadow of AI, or it’s right there, and you talk about it.
You know? From the meeting summary tools to whatever else you want, to the features of the product. But it’s all about enablement, isn’t it? So, you know, very excited to be here and talk more about that.
Perfect. And, yeah, thanks for that intro. I was about to ask you to tell some non-AI story, and then you went with the, you know, spider bite.
And I’m like “You didn’t say that to be true.”
Exactly.
Yeah. Yeah. So you know, I think that we have that covered.
And, you know, I’ll just dive right in. Right? Like, there’s so many topics that I would love to ask you. But first of all, I think that, you know, every CISO has their own way of thinking about AI enablement.
I’ve been to rooms, you know, where they kinda said: not in the next two, three years. So they’re just basically saying no.
And some are really, really interested and curious to get things in, but are worried, you know, from where you see it, kinda what do you see as the real threat?
What really keeps you up at night if your CPO comes tomorrow and says, you know, Nir, it’s not a question. We have to get AI in our product.
First off, they do say that. You know? And any CISO trying to stop AI, well, even in the military and the governments, you know, everybody’s doing it. It’s the secret sauce.
By the way, sometimes it’s overhyped. Sometimes it’s not as good as people think, but the solution is always we’ll just do it with AI, you know, because it’s so magical. You know? Even if you use that app with your kids to make a video or whatever, you know, make me dance with the pope.
Boom. It’s there. You know what I mean? So it just opens up the mind.
So I don’t really believe there are many organizations where the CISO is totally unaware of it. What I see a lot from CISOs is a struggle with the ability to control it, because it’s happening so fast. And let me quote one of my mentors, the drug dealer from Baltimore called Slim Charles from the show The Wire.
It’s one of my favorite scenes. Some other drug guy gets out of jail, and he’s like, oh, everything changed now. And Slim Charles said something like: the game stayed the same, it just got more fierce. You know? And I remember being in high school watching The Wire, which is a great show.
Anybody listening: the best show ever.
The best show ever.
Stop listening to us afterwards and go watch The Wire the second we finish. So I remember, like, wow. And I remember that sentence. It’s, like, you know, a defining sentence for me.
And I think that’s such a good sentence for AI because we’re talking about APIs. We’re still talking about RBAC. We’re still talking about, you know, what you were talking about. So many things are the same, but it’s more fierce.
It’s more fierce, and it’s faster. You know what I mean? It seems like every other week, there’s a new announcement.
So shadow IT, which was already a problem with SaaS, you know. It’s not like cyber has a shortage of problems. I mean, ransomware is still something not figured out. It’s not like, oh, we solved it all.
You know, let’s wait around for the next thing. Nothing is really solved, and now it’s just going so fast. So I think there’s a speed problem, especially for the CIO, where there’s budgeting and you wanna control, you know, and you wanna plan, and you wanna make sure the organization is going the right way, that you don’t have inefficiencies because, let’s say, people are using Jira and monday.com or whatever the case is. It’s really hard to look at this fiercer game old school. You know what I mean?
You have to look at it a little differently. And, you know, I think AI is great. I’m a proponent of AI. We’ll talk about it more later.
But the speed. So it doesn’t keep me up at night.
I have a baby girl, so she keeps me up at night.
So yeah.
Yeah. I move over.
But it doesn’t keep me up at night, but it is something that concerns me. Just the pace of new things that, like, you know, my team didn’t even figure out yet, and there’s already a new one. And, again, if you’re, like, a very controlled environment, say, a government office where nobody has local admin, you don’t have devs, you have to open a ticket to IT to install anything or to visit any website, and you have, like, a VDI, you know. But these organizations are becoming less and less common. More organizations already have the cloud happening, especially a company like Rapyd. We’re a cloud-first global company.
And your customers at Rapyd, do they expect to get, you know, AI experiences within their products? Are those the requests that your team is getting?
Definitely. Definitely. Everybody expects to have AI.
There’s a reason it’s the first thing you see in Google search. Because once you experience that kind of massive brain that gives you exactly what you want, or close, you know, it’s really hard to go back. You know what I mean? It’s really, really hard to go back.
So that’s what it is. Even if they’re not aware of it, they’re definitely expecting it. And even those who aren’t, well, you know how it works with the early adopters. You know, even the tool you’re releasing right now, AgentLink, some CIOs don’t know they need it.
They don’t know. The CTOs, they don’t know what they need because they’re not there yet. They’re, like, just signing the contract with Anthropic and thinking about the best use case.
They don’t know that they need it, but then they’ll get stuck.
And then they’re like, okay. How do I get past this wall? And then there’s a door right there. You know what I mean? And these doors are priceless. And if you plan in advance and you think about the principles and you talked about this earlier. What are the principles that could help me build a door, you know, that will help me visualize, understand, take this in a controlled and managed way?
So it lands on Frontegg many times.
So, you know, you’re saying that a lot of this is hidden from security. And we don’t need to specifically talk about AI to discuss this, you know, tension between engineering and security. I, you know, worked in several organizations, from very small ones to big ones. And engineering wants to move fast, and they also, you know, want to use all the latest and greatest, all the fun names, all the fun, you know, frameworks.
They just go ahead and npm install whatever. Right? And voila. Somehow it gets to production.
And they’re not really thinking all the time that you know, about the security concerns. It’s not like they don’t care. It’s about moving fast and getting the task done. Right? And a lot of times, what we see is that once it gets to the security people, you know, once it gets to the CISO, it could be already too late or at least, you know, you have to handle the situation now.
And that happened before AI, and I’m sure that it happens now. So today, you go ahead and install some kind of MCP, which I think is even easier. Right? You go to Cursor. You say, build me an MCP, and, basically, it goes out and builds an MCP.
Or do you just go on GitHub and download something?
Yeah. Or you download something from GitHub, and everything is overhyped.
And, you know, retroactively, you kinda hear about that.
How do you kinda suggest solving that tension so that Rapyd won’t find themselves, you know, with an MCP that is exposed to incidents just like the ones that we heard about over the last few weeks?
Yeah. So I don’t try to solve anything, because I would go insane. I probably already went insane. Let’s judge at the end of our chat.
But you can’t solve it. You can minimize it. You can try to manage the risk, but you’ll never solve anything. By the way, I always tell my team, we don’t wanna solve cyber problems.
We just wanna not get hit. You know what I mean? Like, it’s good for us. It’s good for us if there’s, like, cyber attacks and bad guys.
That’s why they pay us. Imagine that it would all go away. They’d fire our asses so fast. But, yeah, definitely, it’s something that we see.
And there’s also, I always notice with developers, there’s always a kind of: they wanna use it, but they don’t want you to know about it. Because what happens is, like, let’s say a developer downloads Cursor and pays for it out of pocket. He is killing it. He’s, like, pushing more code than anybody.
He’s getting, like, a bonus, and they’re sending him to, like, re:Invent or whatever.
And then his boss finds out, and he gives it to the team, and it’s an equalizer. So we’ve seen this a lot, and, you know, it’s a known issue with open source. You see it with Chef, with Puppet, with a lot of these kinds of open source tools that didn’t convert well, because the developers and DevOps people using them didn’t wanna raise it.
So that’s an inherent problem. Developers are curious people. They wanna use the latest technology. We want them to be curious and use the latest technology, but then they don’t want to ask your permission.
They just wanna do it.
And again and now we’re talking AI speed. Everything is fast. The code is pushed out faster. You don’t even know where it’s coming from.
And, you know, every IDE has, like, an extension, even if you don’t use Cursor. So the first step is to acknowledge the problem. To acknowledge the problem, not to be an ostrich. Not to say, listen.
It’s not my problem. I just manage cyber risk. You know, if the company wants to sabotage itself, let the CTO do it. That’s a very uncollaborative approach.
Right.
And I’ve seen CISOs like that. I was talking to a CISO, and he was telling me that he doesn’t do this. And I’m like, aren’t you afraid?
Like, aren’t you? He’s like, listen. And I’m like, you know, we’re one team. A big, big company, ten thousand people, you know, or relatively big compared to most companies.
And he said, if the company wants to suck, it has the right to suck. And I said, like, oh my god. You know? And, really, he’s not focused on the right thing.
So you need to be aware of the problem, and you need to be collaborative.
Your goal is for the company to provide value to shareholders, to customers, to grow profit so then you can make more. It’s very simple. That’s the mindset I think executives need to have, especially people in management. So you’re doing a disservice when, instead of being a guardrail or a brake, you’re being a roadblock.
So, you know, the thing in cyber, and this is cliche, I’m sure many people heard this already, is that, you know, the brakes let the car go faster. Right? Because without brakes, you need to go really slow, because you can’t stop when something happens.
So if cybersecurity teams become brakes, they allow the company to go fast because they know cyber’s got it. So that should be the mentality.
And if you look at successful companies with successful cyber teams, like Netflix, like Google, that’s the mentality. They’re like, okay, here are the guardrails, and I’ll hit the brakes when I need to. Now go as fast as you can.
Exactly. Setting up the guardrails. I completely agree. If we keep everything open and only get stopped, you know, when we’re speeding and crash into a wall, then what have we done here? Right? So we want to enable the movement, but by setting all those kinda predefined rules of what we allow and what we don’t allow.
I completely agree. There’s no philosophical debate left for any CISO. The AI revolution is happening. You know, cars are driving by, and you’re riding a horse.
Get in a car. Get in a Tesla. It’s a nice car. Have a little joy ride.
You’re gonna enjoy it. Like, embrace it. It’s gonna happen whether you like it or not. There’s just something you could tell. You know, the first time I saw an iPhone (I’m seventy five years old, for anybody listening or watching), it had just come out, but I’m old enough to remember iPhones not being a thing. And then you’re like, oh, wait a second.
It’s like, there’s glass, and you, like... and it sucked. If you remember the first iPhone, it was a horrible thing. It was, like, big and bulky, but you saw it, and you’re like, oh, okay.
This is the future, and you got it. And today, you can’t even imagine, like, the old school phones. That’s just the only kind of phone there is now. So when you look at AI and you ask it, hey.
Write me a nice email to my boss. And you’re like, oh my god. I’m like a genius writer right now.
And you’re like, okay. This is the future. So embrace it. Enjoy it. Be, you know, lead the charge.
Be brave rather than sit around, sulk, you know, be victimized yourself. Oh, there’s change. Yeah. That’s part of the fun.
It’s really part of the fun.
I have a nice quote from The Wire for you later about that. So, you know, Nir, you played around quite a lot with AgentLink, and I want your opinion, but I want it soon. Let’s first show the audience what it is that we’re talking about, and then get back to you for some of your thoughts about it. I know that you have some interesting insights.
So I will share my screen. Again, we’re going for a live demo here. So, you know, hopefully, everything works. But if not, you know, that’s fine.
And, you know, yesterday, we showed you how we take an application and basically turn it into an AI-powered interface that could be accessed through Claude in a matter of minutes. So, actually, we did it live yesterday.
You know, people told me, like, do a video. It didn’t work for Mark. It didn’t work for Elon. You know? But I said, you know, it’s gonna work for Sagi. So yesterday, the gods of demos were good to me.
And we actually performed a live integration of an expense management application, and we opened it up to Claude by, you know, opening up the APIs and creating some tools from them.
We had some calls that were made through Claude, and that’s super cool.
And, you know, going back to Claude, let’s just add, maybe, you know, an expense for a billboard for one hundred k, right, from last month.
So I’m adding this type of expense, and we’re okay. So Claude is basically trying to find the right tool. We already connected the tools of the Expense app to Claude. So let’s see.
It works on the request. And while it does that, I just wanna show you a few things. Besides defining the tools, so we see here the tools that we defined, we see the configuration that was made, with the MCP gateway up and running, connected to the IDP, which in this case was Auth0, just to remind everybody. But we have no enforcement, basically. So any tool call is allowed, and the request to add a one hundred k billboard expense should basically go through because there are no guardrails. There’s nothing.
So that should work. We can also see the kind of the request details here.
And let’s see. It’s taking some time to call the request. Let’s see that indeed it’s working out. If not, then I’m going to ask it to do it again.
Okay. We’re okay. So Claude was, you know, maybe scared that we’d switch to ChatGPT. So, basically, it worked. I added a one hundred k billboard expense. And if we go to the UI of our application, we basically see this expense.
Now, obviously, we want to set some type of guardrails and policies to protect, you know, these kinds of things when things that are too, you know, too sensitive or their value is too high. We wanna block these kinds of things. So what we will do is we will use the access control capability of AgentLink, and let’s try to add some policy. So at first, what we wanna do is, you know, let’s ask to block anything that is above fifty thousand.
Okay? So: create a policy to block any expense with an amount greater than fifty k, and, you know, we’ll give it a tool. We only want to enforce it right now on the create new expense tool that we had. We don’t have to be so specific, but in this case, I wanna be specific.
So let’s ask it to create the rule. We’re using the AI assistant, which we worked really hard to train kinda to accept all of these requests. As you can see, it validates. It wants to create a conditional policy to block any request for this tool and deny anything with the amount larger than fifty k.
Yes. I will confirm, and it will go ahead and create this access policy.
So let’s see that it actually does that.
Okay. Perfect. So it created the policy.
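A no-code rule like this one typically compiles down to a small declarative condition. Here is a hypothetical sketch of how such a rule could be represented and evaluated; the schema is invented for illustration and is not Frontegg’s actual format.

```python
# Illustrative representation of the no-code rule created in the demo:
# deny create_new_expense calls whose amount exceeds 50,000.
OPERATORS = {
    "gt": lambda field_value, threshold: field_value > threshold,
}

policy = {
    "tool": "create_new_expense",
    "condition": {"field": "amount", "operator": "gt", "value": 50_000},
    "action": "deny",
}

def matches(policy, tool, args):
    """True if the incoming tool call trips this policy's condition."""
    cond = policy["condition"]
    op = OPERATORS[cond["operator"]]
    return policy["tool"] == tool and op(args.get(cond["field"], 0), cond["value"])

# The 60k billboard request from the demo trips the rule; a 30k one does not.
print(matches(policy, "create_new_expense", {"amount": 60_000}))  # -> True
print(matches(policy, "create_new_expense", {"amount": 30_000}))  # -> False
```

Storing the condition as data rather than code is what makes rules creatable from a chat assistant or a point-and-click UI without a deploy.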
You know, Nir will talk about it in a second, but we try to enable fast, rule-based creation so you don’t have to create those rules manually, though you can also do that manually. And that’s it. We have a policy. Right? So now if I go back, you know, let’s basically just copy that and create something for sixty thousand, you know, for the one zero one billboard.
Okay. So we are asking now Claude to create an expense for sixty k, and we expect the policy to be enforced. And, you know, that’s exactly what we see here. So it looks like the expense requires administrator approval due to organization policy.
Basically, we have that blocked. And if we go back to Frontegg on the monitoring, you know, we can see that the execution was indeed blocked by policy. So we haven’t been able to add that, which is cool. But now we want to, you know, make things even more interesting.
So, you know, let’s not block it. But one of the things that is happening, and this is super interesting, and, you know, Nir probably already observed it, is that once my account is connected to Claude or to ChatGPT, let’s say that somebody compromises my Claude account. Basically, they can continue and perform all of the activities that I was allowed to do through the connector, through the initial login. That means that if my ChatGPT or Claude account gets compromised and I’m connected to tools within the expense application, for example, then an attacker can, you know, theoretically perform actions without proving that they are me, just because they compromised that AI platform account.
So what we want to do is add another policy: let’s create a policy to step up any expense creation above twenty k, again, for the create new expense tool.
Okay? So, basically, what we’re saying now is don’t block. Right? If it’s above fifty k, you just block it.
I don’t want to allow my AI platforms to perform such big expenses. But if it’s above twenty k, let’s step up the request. So stepping up the request basically means that we want to validate that the user is indeed who they claim to be. We’re not trusting the AI platform to perform the actions just because the connection was made maybe a week or two ago.
We want to make sure that this is validated. So we added this policy to step up the expenses above twenty thousand, and now let’s see what is happening. So now I will go and let’s add an expense for an off-site management event for thirty three k from last week.
So this shouldn’t be blocked, right, because it’s under fifty k, but it does fall under the above-twenty policy that we added.
And what happens now is that the expense has been submitted. It wasn’t blocked, but additional verification is required for this expense. You should receive a notification via email or SMS, whatever it is that you set up. So if I actually go to my email, I can see that I received this verification to my email that asks me to go and validate. So this way, we basically get a step-up, and I have to control the email.
I have to prove that I am who I claim to be, and not an attacker who somehow got control over my AI platform, or my kid who plays on my AI platform and, you know, just made this request or repeated something like that. So now I actually went ahead and approved it. And now that it has been approved, let’s see that, indeed, it gets refreshed.
Okay. So now we see the off-site management event in the expense application.
We can see it here. But the previous one, the big one, the one zero one billboard for sixty k, is not here. Okay? So the sixty k was blocked, and the thirty three k is actually here.
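The step-up behavior we just saw can be thought of as a small state machine: the call is parked, the user is challenged out of band (email or SMS), and the tool only executes once verification completes. A minimal sketch, with all names and mechanics assumed rather than taken from the product:

```python
import uuid

pending = {}   # challenge_id -> parked tool call awaiting verification
executed = []  # calls that actually reached the backend

def handle_tool_call(tool, args, step_up_threshold=20_000):
    """Park calls above the threshold until the user completes verification."""
    if args.get("amount", 0) > step_up_threshold:
        challenge_id = str(uuid.uuid4())
        pending[challenge_id] = (tool, args)
        # A real system would send an email/SMS challenge here.
        return {"status": "verification_required", "challenge_id": challenge_id}
    executed.append((tool, args))
    return {"status": "executed"}

def complete_verification(challenge_id):
    """Called when the user clicks the email link or confirms the SMS."""
    tool, args = pending.pop(challenge_id)
    executed.append((tool, args))
    return {"status": "executed"}

# The 33k off-site expense from the demo: parked first, executed after approval.
result = handle_tool_call("create_new_expense", {"amount": 33_000})
print(result["status"])                # -> verification_required
complete_verification(result["challenge_id"])
print(len(executed))                   # -> 1
```

The point of parking the call server-side is that a compromised AI platform session alone can never finish the action; a second channel the attacker does not control has to confirm it.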
Perfect. So this is the second option. So now we either block it or we validate it. But let’s maybe make things even more interesting. How about, you know, I’m making some request that falls in the middle, between what I can approve myself and what gets blocked, and I want somebody else, like a super admin, to approve the request. So I will create an approval flow.
Let’s delete this one just to start from scratch. I’m going to create an approval flow. So approve via SMS to super admin.
And let’s do SMS right now. I will just show you my phone. Hopefully, you will see it, and it works.
Okay. And we see here that the flow will include how many approvers we want.
But in this case, I’ll just put my number.
Okay. We can add more approvals. We can define what are the roles that the approval should have. But in this case, we’ll just ask for the approval of our super admin through SMS.
Let’s assume that this is the phone number of Nir, our CISO, and he has to approve some kind of request. So it’s not blocked, but we can set up some settings here. We can send reminders every sixty minutes. We can notify the requester of the decision, auto-approve some of the requests.
Right now, we’ll just go ahead, set a reminder, and ask for the decision sent out to the requester. And now let’s create a new policy.
Let’s do it here and, you know, approve via SMS anything above forty k.
Okay. So sorry. Not in the right place here. See, that’s what happens when you’re back to point and click and not using AI. But this one, I want to add manually. So approve via super admin SMS anything above forty k.
Okay. So I’ll set the amount to be greater than forty thousand. So that would be anything between forty and sixty k. And I will request for approval.
Let’s do it only to create a new expense tool. We can pick updates as well, but for now, we’ll just use the create new expense. And this should ask for an SMS approval from the super admin.
Great. So I have that enabled, and now we’ll go ahead and let’s add plans for a new office for fifty k.
Okay. So this request should fall between the forty and the sixty k and ask for SMS approval. So let's see what is happening. It tries to use the tool.
And okay. This one is submitted for approval. The approval team is being notified, and once they approve, the fifty k expense should happen. Let’s see if indeed it sent something to my phone.
Yeah. So I got this, hopefully, so that you can see it, but I got an approval request on my phone. Let’s assume that this is the super admin that got it. And I will just open it up, and I can approve via my phone.
And once the approval is completed, we should go ahead and see. Let's actually ask for all my expenses.
So this should go to the application, and we should actually see this office expense as one of the expenses.
Some errors. Let’s try again.
So it will call the list-all-expenses tool.
And here are the expenses, and we see the new office expense that was added. So we see the fifty k expense here. Perfect. So this was done.
Great stuff. We have a set of policies that we can define, and we can do that on anything. So we can say, you know, block things only from the United States, or block things only that come from ChatGPT, or only through a specific model, or anything as granular as you want. We can control everything.
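The granular matching just described (by country, platform, model, and so on) can be pictured with a hypothetical sketch. The field names and policy shape are invented for illustration, not Frontegg's actual schema.

```python
# Hypothetical sketch of granular policy conditions, not Frontegg's
# schema. A policy matches a tool call when every condition it
# declares is satisfied by the call's context.
def policy_matches(policy: dict, call: dict) -> bool:
    return all(call.get(field) == value
               for field, value in policy["conditions"].items())

block_us = {"action": "block", "conditions": {"country": "US"}}
block_chatgpt = {"action": "block", "conditions": {"platform": "ChatGPT"}}

call = {"country": "US", "platform": "Claude", "tool": "create_expense"}
print(policy_matches(block_us, call))       # True
print(policy_matches(block_chatgpt, call))  # False
```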
Before we move to the next step of the demo, we have some questions from the audience, I see. So okay. So the question is, can you add more than one approver? So, definitely, we can go ahead and define approval flows.
If we edit this one, you know, we can have as many approvers as we want here.
So no problem.
Anything that you want. So one of them could be through phones. The others could be through emails.
We can also set up approvals from different roles that they should have, so definitely an option.
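A flow like the one just described, multiple approvers over different channels with reminders, could be pictured as a configuration object roughly like this. All field names, numbers, and addresses are invented for illustration, not Frontegg's actual schema.

```python
# Hypothetical approval-flow configuration, illustrative only.
# Two approvers over different channels, hourly reminders, and the
# requester is notified of the decision, as in the demo.
approval_flow = {
    "name": "approve-via-sms-to-super-admin",
    "approvers": [
        {"channel": "sms", "to": "+1-555-0100", "role": "super_admin"},
        {"channel": "email", "to": "ciso@example.com", "role": "admin"},
    ],
    "reminder_minutes": 60,
    "notify_requester": True,
}
```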
Okay. So now we have those guardrail policies set up so you can control the type of requests that are available on your enterprise grade MCP through AgentLink.
But now I want to address another issue, and I'm sure that, you know, Nir will also vouch for that. So remember, there's a lot of questions for Nir after we finish. But what I want to do now is not only control whether I'm allowing some actions, blocking them, or asking for approvals. I actually want some data protection control. So we want to make sure, for example, that no emails are coming back in the responses. So let's create a data protection policy to redact emails, let's say, for users within the US.
Okay. So for any user that tries to ask for some information from the United States, any email in the response is going to be redacted.
So we can use that for, you know, GDPR or for other things. And, you know, again, we can see here that the policy was created. I haven't specified the tools, so let's just pick the list-all-expenses and get-expense-by-ID tools, which should be the relevant tools for this kind of guardrail.
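The redaction behavior this policy describes can be pictured with a minimal hypothetical sketch: match emails in a tool response and replace them when the request comes from the US. The regex and function are illustrative, not Frontegg's implementation.

```python
import re

# Hypothetical sketch of the email-redaction guardrail from the demo,
# not Frontegg's implementation. For requests originating in the US,
# every email address in the tool response is replaced with a marker.
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def redact_emails(response_text: str, request_country: str) -> str:
    if request_country == "US":
        return EMAIL_RE.sub("[REDACTED]", response_text)
    return response_text

print(redact_emails("budget owner: marketing@frontegg.com", "US"))
# budget owner: [REDACTED]
```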
So let’s go ahead and add a new expense for five hundred dollars for a PPC campaign.
And in the description, add marketing at Frontegg dot com as budget owner.
And this one is from a week ago.
Okay. So now I’m adding marketing at Frontegg dot com on the description just to describe that marketing is owning the budget for this one. So we’ll go ahead and create an expense. The creation itself should go ahead without any problems.
Right? It meets all the policies. Perfect. It’s added.
And now let’s maybe open up a new chat and ask to get all the expenses.
So I’m from, you know, taking this request from Mountain View, California. I’m a US user asking for all of the expenses, and I shouldn’t see any emails now. So let’s see what is happening here.
Here are the expenses, and we see that the email address was indeed redacted here from the response. Right? So for the budget owner, we see that there is nothing here. Right? And, also, in the response, we see no emails.
And let’s ask it again. Get me the email address of the PPC campaign budget owner.
So we’ll try to fool it, and let’s see what is happening.
So Claude will try to get me the email address.
It tries to think about what it can do, but it says that the email address is redacted, so no email address. So we can define policies on data protection as well. Now, I created a policy which is very, very specific, but if we create data policies and we go here to the data types to protect, you can see that I can just pick all of the PHI, so any protected health information, any PII. So we try to take a regulation-based approach here.
You can pick any data item that is included within any regulation. But you can also say, I want to just, you know, be compliant with GDPR or the EU AI Act or anything, and the Frontegg agent identity and access management controls basically make sure that no data that wasn't supposed to go out will go out. And here, I want to welcome Nir back because we presented a lot of topics here that have to do with compliance and with security.
And I want to ask Nir to give me some, you know, some of his kinda views on that. So let’s get Nir, please, back to the stage.
Nir. So, yeah, hopefully, you've been following. It's a lot to take in. Right? Yeah.
But it’s crazy.
We were talking about speed and velocity of change, and I'm looking at that system. You know? This is a platform. You know what I mean?
It's like you're launching a new product. We're used to, you know, companies like Palo Alto launching products that are, like, you know, one screen. So it looks amazing. You know, I've touched it.
I've tried a lot of it. We're using a lot of it, but there's so much more that you're showing. And just one more point. Somebody in the audience asked another question regarding how to set up the first rules, to spot, like, weird behavior.
So, yeah, Kate. So, Kate, my advice to you again, I’m not a sales engineer, but my advice to you is to focus on understanding what is good. What we try to do is use Frontegg to understand what is the intention of an agent or an integration and what we’re trying to do.
And once you clearly define what good is, you can start with the baseline. You can start, you know, looking at what's over or under the baseline, start thinking, okay, thresholds, etcetera, etcetera. So define what's good, and then look at what's under or over that threshold that you defined. That's, like, the basic way to start looking for that stuff. And the more you're focused on what's good, the better. And, again, a system like Frontegg is perfect for that because you define the role, you define the permission, you define so many things.
There’s a baseline. And then once you start moving away and shifting, you can catch that quickly. So with the right system and the right mindset, it’s pretty simple.
Awesome. Great answer. And I want to welcome you know, I think it’s a great opportunity to welcome Aviad as well to the stage.
Aviad, my cofounder and the CTO of the company. Let's welcome him. Aviad, welcome. So Aviad is, you know, the security mindset behind agent identity and access management. And, Aviad, you've been, you know, preaching that to organizations over the last few months. Last week, you spoke at an AWS conference together with the folks from AWS about the importance of those guardrails and security once you're embracing AI.
I would love to, you know, welcome you to the conversation. We’re talking here about trust. Right? Because it’s a new thing.
And, you know, when we protect users with our customer identity solution for web and for APIs and mobile, we're basically saying, you know, we want to protect against hackers, against bad guys. Right? But here, it's not only that. So a lot of what we hear, and Nir, maybe you can relate to that, is around not trusting the AI platforms yet, not wanting some of our data to be used for training, and stuff like that.
And these are some of the things that we're trying to protect here. Maybe you can share a few words, and then we'll let Nir give his own kind of perspective on that.
Yeah. So the lack of trust. On one hand, you know, AI makes mistakes.
And we all know that. Right? I've been asking for recipes for my keto diet, and it throws sugar at me sometimes.
But other than that, you know, because of the interface change, one of the examples I gave this week at that conference is that, you know, I'm a coder at heart. I've been doing it for the last twenty five years. And in GitHub, in order to delete a repository, you have to go through four or five or even six clicks. Okay? Are you sure? Then there's another prompt, and then you have to check a box, and then you have to type the repository name, and then it goes to a step-up, and then, you know, the repository has been deleted.
But if you take a look at the delete-repository API in GitHub, it asks for nothing. So you just delete the repository. So the magnitude of failure in this case is huge. You had the Replit database deletion, where the AI, you know, deleted three years of a production database for a customer with no way of restoring it. You know, the interface is changing.
MCP interceptors. Right? Like, we even heard of Anthropic's MCP interceptors recently and the new browsers. Everything is going to go through MCPs.
So I totally relate to what you said. So we don't wanna be the blockers. We wanna be the enablers of our companies, and we wanna make sure that the company adapts, because a company that doesn't adapt to the new technology will probably die sooner rather than later.
But we wanna make sure that when they adapt, when our employees adapt, when we expose our products to our customers, we do it safely. Because, you know, eventually, as a builder, if I'm opening up an API and, you know, my customer connects it to AI and the entire database gets deleted, it's not their fault. It's my fault. I should have provided them with the guardrails and with the option to connect it safely. And this is what I'm trying to bring into my conversations with people within the industry: that you wanna be an enabler, but you wanna do it safely.
Here’s some of your thoughts over data protection.
Maybe it’s just my viewpoint of the world. I don’t see it, like, as a fear thing. It’s not scary. I think MCP is the opposite of scary.
It’s what you need. You know? It’s what you it’s just like it’s a framework that lets you build controls over your AI interfacing.
You know? And that’s what that’s what’s so needed. It’s like and and then you could build gateways. You know?
MCPs are the APIs of AI. Right? And once you have APIs, you can build API gateways. You know, once you have SAML and OAuth, you could take Frontegg and take away all your authentication headaches in a second.
But before you had that, you had to write it yourself. You've been doing this for twenty five years, Aviad, you know that. You had to write your own stupid custom kind of database for users and user schemas and all that stuff, and then you'd mess up because it's not what you're meant to do. You're not in the Active Directory business, and most people aren't. I mean, you actually are.
And then, you know, you end up with a subpar product.
Most companies are not in the SAP business. They’re in their business. They’re in fintech. Like, I’m in the fintech business. They’re in whatever the consumer business. Doesn’t matter. And then they have to adopt MCP in a scalable enterprise grade level.
And if they do it correctly, they're gonna win. And if they don't, they might get breached. And it's just like anything with competition. You know, a lot of people are not gonna watch this webinar, and they're gonna do it badly.
And good. Great. Because that means companies like Rapyd, that partner with companies like Frontegg and just take all that problem away because it's all presented to you, companies like that are gonna win.
They’re gonna win because they’re just gonna be able to move all that faster. So for me, it’s not scary. It’s exciting. It’s just like going on a road trip is exciting. If you have a good car with a seat belt and proper brakes, then you could go on the Autobahn.
Have you ever driven on the Autobahn in Germany or in Austria? That's so much fun. You go super fast, and you're like, okay. Yeah.
I could have crashed, but, like, I got a good car. You know? Hopefully, you sprung for a good car. So get good cars.
Go down the Autobahn. What are you doing in front of the computer, guys? Go live a little bit. What are we doing here?
And watch The Wire. Right? That's the most important takeaway.
Exactly. So, Nir, I want you to stay with us. And just like yesterday, you know, I asked Aviad to join not just for the lovely flowers and his amazing tattoo, which is really amazing.
You know, that deserves its own episode, its own webinar.
Yeah.
Its own webinar. Yeah. So we’ll get to that.
We can do a live demo on this as well.
So yeah. We will do that. I promise. I promise we will get to it.
What bet did you lose, Sofia, to get it?
Oh, no.
That’s nice. Keep it to keep it for the show. Keep it for the show.
So, actually, you know, I would love Nir to still stay with us, and we'll talk to him in a second. Aviad, let's go through some of the items to give the audience a view of how we look at the security aspect. There are so many things here that the first users of AgentLink already ask and worry about. So first of all, tenant-level policies, you know, multitenancy, the big advantage of the Frontegg customer identity platform, and how we take it to AI as well.
Yeah. So, you know, Frontegg is known for its advanced B2B customization.
So what you just showed, which, you know, the team has really worked hard on, is the foundation of being able to define guardrails for each tool, or for each, I would say, AI interface, and to define, you know, approval flows, etcetera.
And the way that we kinda launched it was on the level of the application builder. So the application builder can define it.
One of the main, you know, differentiations for Frontegg is being able to bring all of these goodies to the organization level. Each of the Frontegg customers will actually be able to say, I don't wanna deal with it. You know, a Frontegg customer might be dealing with a big organization, a huge enterprise, a Fortune fifty company, and alongside that, you know, it might be a small startup. And each of those customers will have different, you know, security requirements.
And this is why we build these policies to be able to be, you know, for our customers to delegate.
So really soon, each of our customers will be able to tell their customers, the organizations that are using their products: just define these policies through Claude, through ChatGPT, as you just did on the Frontegg platform, from this chat bot on the left side.
Yeah. We’re talking about delegated administration. One of the advantages of using Frontegg customer identity, our well adopted product, is that the users, the end users, could basically do anything on their own, and AI is no different. Right? If I’m logging in and I’m an admin, I want to be able through chat to add another user to set up the same policies, but not within the portal. If I’m allowed to do it, why not let me do it on my own workspace, on my own tenant? So that would be amazing.
That’s the pretty thing. I I would just say that it goes way beyond the tenant because the real power of tenant level policies, the kind of around it and the, you know, you have a tenant or organization, but you might wanna be able to define policies for a specific department or a specific team within this organization, and that’s something that is totally available with this with this granularity.
Perfect. Talk to me about the agent profiler.
Yeah. So one thing is that we are collecting tons of data from our design partners, you know, that are using it extensively.
And what we are working on is a very powerful engine, because you don't wanna define these policies, you know, based on pure static definitions of what a risk level is.
What we are building is an ML-powered behavioral engine that will be able to detect when an agent behaves abnormally, and that will raise the risk level.
And then we’ll you know, our customers will be able to kick off policies.
We wanna make sure that there is a balance between user experience and not, you know, having to step up each and every time. If we trust the agent, we give it more granularity and more, you know, grace. But if the agent behaves abnormally, we're going to catch it. We're going to collect a profile of each agent.
Each agent that connects might be ChatGPT, which we trust more, but it might be a custom agent that is trying to harm, you know, our customers. And this is where we step forward and we say, okay, we're gonna block that based on the profiling of this agent.
That’s that’s that’s great.
Leveraging Yeah.
It leads to the next point, which is the, if I’m not mistaken, the credit score Yep.
Which is, you know, zero trust agents.
It’s funny to talk about zero trust again, but it is zero trust agent. We, you know, we give no benefit of a doubt for any agent, and we keep learning how agents behave.
And an agent is not an interface. Right? You can connect and try to impersonate ChatGPT, and we know how to detect it as well.
And then we can trigger automatic verification for this kind of agent based on the credit score. So as you know, Sagi, you moved to the States, like, three years ago, was it?
Yep. In order to buy a car, you had to earn your credit. Agents will have to earn their credit with Frontegg as well.
And we have some very interesting technology behind that. You know, we won't get into details here, obviously, but it's very interesting how this is done. It's not straightforward. There are new approaches there, a combination of LLMs and other stuff.
So, wow, super excited to see that coming soon. And let’s talk about the automated context based policy. Right? Understanding actually what is happening behind the scenes, right, for the user and then deciding what to do.
I think one of the, you know, inputs that we got from security folks like Nir is that they wanna buy a system and never touch it. Okay? They want this system to act as another layer in their security stack, and that's pretty much it. So we are working on a very advanced context-based policy that leverages everything we just talked about with the agent profiling and the credit score, but is able to determine the risk of an action or the risk of a tool call based on the context of the action, the time, the location, the risk score of the user, and the risk of the agent.
So if you think about it, we are going toward a completely automated policy. So you don't need to define anything. Okay? The system will define stuff for you and will act, you know, will move forward with security and will ease off for users and agents that it trusts. So, yeah, that's the end vision for this entire analytics and IAM for agents.
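As a rough mental model of combining those signals into one decision, here is a hypothetical sketch. The weights, thresholds, and names are invented for illustration and are not Frontegg's actual scoring.

```python
# Hypothetical sketch of context-based risk scoring, illustrative
# only. Combine the action's inherent risk with how little we trust
# the agent and the user, then pick a decision band.
def decide(action_risk: float, agent_score: float,
           user_score: float) -> str:
    # All inputs are in [0, 1]; higher score means more trusted.
    risk = (0.5 * action_risk
            + 0.3 * (1 - agent_score)
            + 0.2 * (1 - user_score))
    if risk > 0.7:
        return "block"
    if risk > 0.4:
        return "step_up"  # e.g. verification or approval flow
    return "allow"

print(decide(action_risk=0.9, agent_score=0.2, user_score=0.5))  # block
print(decide(action_risk=0.1, agent_score=0.9, user_score=0.9))  # allow
```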
Aviad, you know, we’re we’re we we understood pretty quickly that we should be the ones to enable this interface as well. Right?
And over the years, the team has built twelve, right, correct me if I’m wrong, proprietary security engines for protecting web interfaces, APIs, and mobile.
How many of those could be actually used, you know, for protecting this new AI interface?
So I think, you know, we are leveraging a lot of the technology that we've built for, you know, login operations and refresh operations.
But you kinda need to bear in mind that different training is needed, because with an agent, we don't meet the agent only at login. We meet an agent on continuous operations.
And what we are detecting now, you know, we even have a case of trying to identify when, in a multi-agent orchestration layer, agents are sharing tokens between themselves.
So, you know, these are the things that we’re really trying to, you know, to leverage the technology that we already built, but just apply it on agents.
That’s perfect.
Aviad, thank you so much. It's so exciting.
And let’s maybe have Nir back with us.
Nir, there’s another I. I promised you another quote that I really like, and it’s also with the there’s a lot of talk in the wire about the game. Right? And one of the quotes that I love is that, you know, the game is rigged, but you cannot lose if you don’t play. So you know? And that’s not your approach. You’re you’re you’re here for playing. Right?
You’re saying that you have to play.
You have to play. You have to play. And by the way, you can lose if you don't keep playing. Because, unlike drug dealers, thankfully, I think most of the people here are not drug dealers.
And if you are, you know, I respect that. But assuming most of us are not drug dealers, this is what's called an infinite game. And, you know, as long as we keep playing, then we win every time we keep playing. That's the goal.
The goal is just to stay in the game and to make gains as you stay in the game, just to keep going every day again and again and again.
And and and, you know, you need partnerships to do that. You can’t do that on your own. There was a question in the audience, if I may, about things CIOs aren’t aware of.
The thing is, you know, we take a lot of things for granted. For instance, today, APIs are so prevalent. And it’s like, oh, it has an API. It has an API.
And we take it for granted, even good APIs, even though a lot of APIs still suck. But I remember back in the day, there weren't APIs, and, like, everything was closed, and you just couldn't connect to things. And if you wanted to even make something in Windows, you would have to be what's known as a Windows internals expert. Remember that?
Remember, like, oh, he's a Windows internals expert? Today, nobody speaks like that, because Windows has an API, and everything has a published endpoint you can connect to.
And so today, CIOs take that for granted. But then they look at AI, and they don't understand that it's not like that. So then a board or a management team will come to the CIO, CTO, whatever, and say, integrate. We wanna be AI-driven. And, you know, they don't know what they want. They just see what's happening.
They wanna use it. They know their customers are gonna expect it. And the CTO or CIO's job is to put the infrastructure in place to enable that when it comes, and it is coming, so the company can run forward. So when you start this journey, you don't know.
And even what you guys are creating right now didn't exist before you created it. There are other tools, yeah.
There are a lot of tools. None of them existed until, you know, six months ago. Some are good. Some are bad.
What’s interesting about Frontegg is the approach that you take the mindset of RBAC and of and of access management and of and of directory as a service. And that mindset, which lets you manage thousands of users, yeah, is the mindset that I think is needed to scale quickly with thousands of agents. And now it’s not thousands of agents. Now it’s tens of agents. But we all see that in six months, a year, there’ll be a little shift, and we’re gonna have it and it’s coming. It’s coming. It’s coming big.
Rapyd, we’re talking about multi- agent architectures where you have one agent talking to two agents, talking through an MCP, unmanageable. Totally unmanageable without a partnership. And the CIO who hasn’t tried to implement that or figure that out isn’t even aware that he needs to. And I think we talked about education.
I think education is key, but people who don't get educated will, you know, walk into Southside Baltimore, like in The Wire, and get educated the hard way. You know what I mean? They'll be in the game, because this is a game you gotta play, at least if you have a company that needs to use the Internet.
You know?
Yeah. That wants to stay in business.
Amazing. I love that. You know, we’re enabling the game, and I think that this is why we’re here.
Nir, that’s so insightful. I love talking to you, and thanks a lot for, you know, for trying out and playing with, with AgentLink.
You’ve been doing that for a while, and would love to get your, you know, your insight, your insights and, Aviad, I can’t wait.
Aviad, work harder. I can’t wait to get my hands on these greedy little hands on all these features you’re working on. It looks really cool.
It’s always funny.
Yeah. Fast releases, fast work. That’s what, you know, we’re here for.
Hook me up, man. Hook me up, like in The Wire.
Right? Exactly. Exactly. And, you know, guys, thank you so much. That's day two. Tomorrow, another exciting thing that we're releasing.
Stay tuned. I cannot wait. It’s going to be a bit different than what we had yesterday and today, but not less exciting.
Cool stuff. For builders, that's a little bit of a sneak peek into it. And I'm gonna see you tomorrow. Aviad, Nir, thank you so much. And join me tomorrow for AgentLink release day three. Thank you. Bye bye.
Bye bye.