Video: AI in FSI: How to get started

In this video, Nate Buchanan, the COO and Co-Founder of Pathfindr, provides practical advice on how to get started with AI: first, understand where AI can have the most impact on your organisation with the least effort and risk.

Set up a call with Nate

Please submit this form if you would like to set up a call with Nate.


So at this point, you might be thinking: how can we get started? There are a lot of different ways to get started. This is a very simplistic view of the way we like to do things at Pathfindr, because we find it really helps clients get their heads around how to go from where they are today to where they want to be from an AI perspective. And some of you may already have experiments in the works.

You may already have people enabled on AI, and that's great. But for the most part, what we've seen is that folks either haven't started at all, or if they have started, it's been very ad hoc. They might have given some team members licenses to something like Copilot, or they've allowed some of their developers to experiment with GitHub Copilot. But there hasn't really been a top-down or bottom-up effort to understand where AI can make the most impact for the organization and then roll it out from there.

So what we like to do is start by figuring out what users need. We took a look at some of the Gartner use cases earlier, and we found that while those can be valuable in some cases, the best way to figure out where AI can make a difference is to start by talking directly to the practitioners, the team on the ground. And not by asking them, "Where would you like to use AI in your job?" but instead asking them, "How do you do your job today?"

What systems do you touch? What are the pain points? What are the challenges? What's annoying about it?

What do you like about it? If you could change one thing about the way your job works today, what would you change?

We've done that for a couple of clients, and it's been really interesting, because of what we find by asking these questions. We do this via a survey that's very low touch; we're not taking people out of their day jobs and putting them in a room to gather requirements, because everybody hates that. In a lot of cases we've gotten insights into how people do their jobs, and into pain points they experience, that were not on the leadership team's radar at all. They were very surprised and very grateful that we were able to extract that information, because by doing that, we were then able to say: let's focus on this particular area, because we know AI can help here, and we also know it would make a big difference to your team.

So step one is figuring out what the users need and then starting to prioritize: okay, we want to focus on this use case first.

Then the next important thing to do is to build something, even if it's imperfect, and get it into the hands of the users who will be using it every day, to make sure it's fit for purpose and to get their feedback. We've found that the best way to get value from AI is to build, test, improve, and iterate like that very quickly, so that you can go from a prototype to a hardened pilot, in some cases, in just a couple of weeks. By working that way, as opposed to waiting weeks or months to deliver a finished product to the users, you gain more benefits and more feedback more quickly, and you ultimately end up with something more robust in production, potentially months faster than you otherwise would have.

And then the third step is to take what you've experimented and iterated on, roll it out across your organization, teach your people how to use it, and continuously improve it. The one thing about AI that is really cool but also really scary is how fast it's moving. So if you put something into production, even if it's adding value right now, it's important to keep tabs on it: to make sure it's continuing to add value, and that there isn't a new solution out there that could be integrated with what you've already built to add even more value, or help your team keep up with the capabilities that are out there. A lot of you probably know that LLMs are notoriously fickle.

If you've built a solution that relies on output from an LLM to generate some kind of content or response to a user query, oftentimes you can't predict what that output is going to be. So keeping tabs on the user experience and the quality of the outputs being generated is very important, to make sure that people continue to use it and that it continues to add value.
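To make the "keeping tabs on output quality" idea concrete, here is a minimal sketch of what automated monitoring could look like. The specific checks, thresholds, and refusal phrases below are illustrative assumptions, not a recommendation from the video; in practice you would tune them to your own use case.

```python
# Illustrative sketch: heuristic quality checks on LLM responses,
# plus a pass-rate metric you could track over time to spot drift.
# All checks and thresholds here are assumptions for demonstration.

def passes_quality_checks(response: str) -> bool:
    """Cheap heuristic checks on a single generated response."""
    text = response.strip()
    if not text:                       # empty or whitespace-only output
        return False
    if len(text) > 2000:               # runaway generation (arbitrary cap)
        return False
    # Hypothetical refusal markers; a real list would be use-case specific.
    refusal_markers = ("i cannot", "i'm sorry", "as an ai")
    if any(marker in text.lower() for marker in refusal_markers):
        return False
    return True

def pass_rate(responses: list[str]) -> float:
    """Share of responses passing the checks; alert if this drifts down."""
    if not responses:
        return 0.0
    return sum(passes_quality_checks(r) for r in responses) / len(responses)

# Example batch: one good answer, one empty output, one refusal.
sample = [
    "Here is the summary you asked for.",
    "",
    "I'm sorry, I can't help with that.",
]
print(round(pass_rate(sample), 2))  # → 0.33
```

A dashboard tracking this pass rate per day, alongside user feedback, is one lightweight way to notice when model behaviour changes underneath a deployed solution.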