Did you find us okay? Yeah, it was a straight T9 train for me. Oh nice, okay. Easy, cool. I heard cardio, because you do cardio, don't you? Nice. I don't mind if I buy this year. Go for it. I might need to get some. Okay. I have my own markers. Oh, okay. I don't think we have any markers. No, I have my own. I have a notebook to scribble my own thoughts on, if I want to just demonstrate something. Okay, cool. No worries. That sounds good. Cool. It's exactly nine. Yeah. Perfect.

Okay. So thank you so much for coming in today to meet with us about our analyst role. In terms of our interview structure, we have an hour today, but we'll probably spend about 45 minutes or so asking some general questions, some behavioral questions, and maybe a few technical questions as well, based off your experience and what you've worked on in the past. So I'm Amanda. We've spoken previously. I'm in the HR team, so I partner with our technology and some of our actuarial teams as well. I've been at Finity for about eight, eight and a half years. So yeah, I'll let Rafid introduce himself.

Thank you. I'm a consultant at Finity. I've been here for about three and a half years now. My background is in actuarial studies, but I really just found a place in the data science and product analytics space. I started at Quantium as a grad for about three-ish years and then moved here to Finity to help bolster up their product capabilities. So I handle delivery for two of our main products, one of which is Punt, which is a commercial underwriting workbench. Sorry, say that again. Which one? Punt. Punt. Okay. I haven't heard of that one. I took a look at Finperils, Finpoint and Tag. Yeah, yeah. So those are some of the other products we license to clients. But Punt is something that we initially built for Suncorp specifically. The white label version of it is Unirider, but that's on our website. Yeah, I saw Unirider. I didn't understand that, but I figured that's probably what the job's about. And then Rebuild is another one, which is like a commercial calculator that we license to some general insurers. But yeah, my focus of late has been in the AI space, augmenting features in the solutions using Gen AI and all the advancements.

Excellent. I think, cool. And did you want to give a brief intro?

Yeah, so I'm Ayush. I got my degree handed to me by the Pro Chancellor last Monday. So, Bachelor of Science, Computer Science, Artificial Intelligence. So I've got that really theoretical baseline of machine learning, deep learning, artificial intelligence that I learned through coursework, but also my own side projects and things like that that I'm particularly interested in. Did you guys get a chance to look at the CV? Yeah. Did you take a look at the, did it have red in it, the color red? No. Good, good. Because this is like a three-page one? Yeah, I gave him the longer one. Excellent. I'll ask you a couple of questions today about the projects. Yeah, please.

I also took the liberty today, actually yesterday, to do a credit card fraud detection. Because I was studying Finity, and I haven't done deep learning or machine learning in maybe three months. And obviously the role is for an analyst in software, data and AI solutions, so I figured it'd be worthwhile to do a bit of supervised learning and get back up to scratch. So I actually produced a Jupyter notebook over the course of 90 minutes last night. I timed myself. I did a bit of, you know, matplotlib, just correlation stuff. And yeah, this is actually just for you guys. Yeah, yeah, yeah. Happy to take a look, man.
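(A rough sketch of the kind of 90-minute supervised-learning notebook described above, assuming scikit-learn and pandas. The CSV, its columns, and the model choice are all hypothetical stand-ins, not the candidate's actual notebook.)

```python
# Hypothetical reconstruction of a quick fraud-detection notebook.
# Assumes a labeled transactions CSV with a binary "is_fraud" column.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

df = pd.read_csv("transactions.csv")            # hypothetical dataset
print(df.corr(numeric_only=True)["is_fraud"])   # the "correlation stuff"

X = df.drop(columns=["is_fraud"])
y = df["is_fraud"]                              # 0 = legitimate, 1 = fraud

# Stratify so the rare fraud class shows up in both splits.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = RandomForestClassifier(class_weight="balanced", random_state=42)
model.fit(X_train, y_train)

# On heavily imbalanced data, precision and recall matter more than accuracy.
print(classification_report(y_test, model.predict(X_test)))
```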
Yeah, but the rest of the projects are kind of things that are like passion projects that I've done of my own accord. Like peg solitaire. This is a puzzle that's sat on my family's table for like five to ten years. And like, I did it as a child, but eventually you learn how to code, you know, and then you run it through Python and then you write a depth-first search. So these ones are the public ones, and I think these are some of the ones that you've seen.
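(On the peg-solitaire solver just mentioned: a minimal depth-first-search sketch over puzzle states. The state encoding, move generator, and goal test are placeholders assumed for illustration, not the project's actual code.)

```python
# Generic DFS over puzzle states. Works for peg solitaire if you supply a
# hashable board encoding, a move generator, and a goal test.
def dfs(state, moves_fn, is_goal, path=(), seen=None):
    """moves_fn(state) -> iterable of (move, next_state) pairs."""
    seen = seen if seen is not None else set()
    if is_goal(state):
        return list(path)          # the sequence of moves that solves the puzzle
    seen.add(state)                # a state that dead-ends once always dead-ends
    for move, nxt in moves_fn(state):
        if nxt not in seen:
            solution = dfs(nxt, moves_fn, is_goal, path + (move,), seen)
            if solution is not None:
                return solution
    return None                    # exhausted this branch: backtrack

# Usage shape (all names hypothetical):
#   dfs(frozenset(initial_pegs), jump_moves, lambda s: len(s) == 1)
```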
How would you kind of describe yourself in broad strokes? I'm curious. Yeah. I'm curious. I get what I want. I just do it. Like, I'm very proactive. I read a lot. I study. I drink so much coffee. Yeah. It helps cognitively. And yeah, I'm in love with knowledge. I would say I'm a philosopher. Yeah. Philosophy comes from two words, philo and sophia, which is love of wisdom. So I have a painting of Socrates in my study, The Death of Socrates by Jacques-Louis David. I love the art here, by the way. I got here early because I couldn't sleep, I was so excited. So I got here at like 8:30 and I was sipping coffee and looking at the local artists, which was pretty nice. I play sport as well. What do you play? I play ultimate frisbee. Oh, nice. With Katja. Oh, yes. Okay. So that takes up a lot of my time. Because mens sana in corpore sano, a healthy mind in a healthy body. Yeah, I really believe that. Definitely.

Amazing. And look, I think you've done a lot of research about Finity, so I feel like you know a lot about what we do and the products and things that we have. But what really attracted you to this role, and why?

Oh, Katja said it was good. I trust Katja. Katja is a pretty smart person. Like, she told me this riddle once at a lunch we had like two years ago, and I still tell people about it. Like, I like Katja. And she's like, take a look at Finity. And when the listing came up on SEEK, I just applied. Because I trusted her gut the way I do on the field. Yeah, yeah, yeah.

So is it aligned with your skill set? Yeah, but beyond that, the finance stuff is kind of cool. I'm a mathematician, so I took real analysis this year. I took higher statistics and probability. One thing I... So did you guys see the Master of Stats? Yeah, yeah, yeah, yeah. I wanted to discuss that with an employer, because I worked hard this year to take higher undergrad-level courses from the computer science stream. I had to get permission, and they really had to see me excel in it. But eventually I got through the rigorous trauma. But it's pure math. Yeah, yeah, yeah. But I'm sure you've done a little bit of stochastic analysis. Yeah, yeah, that's part of the course. But not at a postgrad level. Which is what I'm thinking of doing, because you talk about artificial intelligence, which is... So, AI, would you agree that it's mainly machine learning under the hood? Yeah, yeah. It is, right? It is. The concepts that are underpinning what's hot right now, you pick that up at a base level at university. You would agree with this, right? So this is AI, right? And then this is ML. And then this is DL. And then your Gen AI stuff would be like down here. Do you agree with that? Yeah, yeah, yeah. Because I have a thesis that all machine learning, so machine learning, is actually congruent to statistical analysis. Yeah. And that's where my quest for postgraduate training in statistics comes in. Yeah. Because I want to be an expert at this. Yeah, yeah, yeah. But I'm not sure how that lines up with an employer's side of things, because obviously it would take time to study it. Yeah. So whether I defer or...

Because this is an analyst role. I think it graduates to a consultant role. Yeah, yeah, yeah. Down the line, it's like... Yeah. Yeah, so it's an analyst role. So it's very much sort of... it's a very junior level. To be honest, I think you're at a level right now which is quite high. I think the work you'll be doing is more grounded in... Oh, I don't mind. ...like a day-to-day job, in the sense that it's less theoretical. It's more... at times it might be a bit boring, to be honest. I'm looking to start my career here. Yeah. And so the thing is, it's an investment on your part, and I just want to let you guys know what you're getting into. Yeah. Like, where the ceiling is and where we're at, kind of.

Cool. Yeah. So I guess some context: we're an actuarial consultancy, right? So we work with insurers and clients in that space to help solve problems. Some within the traditional insurance domain, so things like valuations and pricing and things like that. So that's fun to me mathematically. Yeah. And others from a non-traditional standpoint, like if they have a problem, how we can use different techniques to solve it. And this is where it could be a thing that uses decision algorithms, something that uses machine learning algorithms, stuff like that. So what attracted you to Finity and the insurance consulting space, as opposed to a pure tech shop, where some of the theoretical stuff that you're talking about, some of your interests in this space, might be better suited?

Yeah. I mean, look, I'll be honest, I've applied basically everywhere. Yeah. I've gotten two interviews in the past two months. One with Updox, I think they're based in Epic. It's like an online kind of bulk billing thing. And they were concerned that I didn't have any internship experience. Right. So obviously a lot of this theoretical knowledge did come at the cost of a lack of internships. But it's up to you to make that decision, whether it's worth it. You know, I do think I can work cohesively in teams. I do play team sport about three times a week, in high-pressure situations where things do matter. And communicating with coaches, selectors, your teammates, I think there is a degree of similarity between that and a professional setting. Yeah. But yeah, no internships, so Updox didn't really like that. And then I went to Fusion 5. These guys are more of a generalist tech stack solutions shop. I think they advertise themselves as a... what do they say about themselves? It's okay.

So why the insurance consulting space? Well, like, it's just to gain experience. It's like... that sounds really negative. There's nothing wrong with you guys. Like, everything is great. You know, you let me write code. Oh yeah, there's so much stuff here. Let me tell you why. Okay, well, your projects are really cool. You've got 16 projects that span across geospatial climate risk models. I quite like those. Rating engines are cool. And the cross-functional aspect is one thing that really, really makes me smile.
Like, I would be excited to come to work, because there would just be depth and breadth to working at Finity. The growth and learning, obviously it seems like there is professional development here. And impactful work, I quite like that as well. It seems like you guys do make a difference. I think I read somewhere there was maybe even some pro bono work that you guys do. Yeah, from time to time. Which is pretty cool. So I think, yeah. Cool. Yeah, sounds good.

Okay, cool. And I guess, what would you be looking at developing? What skills are you looking at developing? So it might not necessarily be technical learning in that sense, but more maybe like your...

I'd like to navigate the client relationship that you guys seem to do so well. It's like one of your first values, it's like, we go clients first. And I'm like, that's what you should do. It's all about the client. That's really good. But I'd like to hone that skill, to really be able to, when somebody comes to us with a problem, can we precisely address that problem? And keep them happy. Which I think is a skill. So I have to practice.

Anything else that you want to maybe learn? Well, the industry is very different. I'd like to learn the industry. So at Fusion 5, I got rejected at a third-round interview. It was a bit of a kerfuffle, in the sense that I'd already passed the technical interview, but then one of the directors was sick and they threw me in the room with the technical lead again. And he's grilling me about polymorphism. And I'm like, I haven't done this in a year and a half. I don't remember the difference between an abstract class and an interface. And so, yeah, I didn't get that job. But now I know. Now I know. I went back home and I studied. But yeah, it seems like... because I don't do a lot of object-oriented programming at home. I don't need to. I open Python, I use an interpreter, I write like two while loops and a for loop, and I'm a happy man. Or if I'm doing artificial intelligence, I just use sklearn. I don't think that's industry standard. Yeah, I think a lot of our engines are built on Python, so that's really good that you sort of...

Actually, one more thing that I do like about Finity: the certifications. I have a couple that I'd like. And if you guys kind of do, you know, it says, build skills through structured learning, certifications and mentoring. That's definitely a box tick for me. Yeah.

Maybe we can move on to the situational questions. Let's do those ones. Yeah, perfect. Okay. So I think we just wanted to ask a few situational questions. These are really about how you might approach a situation. So if you've had some experience in the past, or an example that you can share, that would be great. But if you haven't experienced it before, just what you might do and how you might approach the situation is perfect as well. And it can be through uni and stuff like that too.

So could you tell me about a time where you learned a new tool, language or technology? What prompted you to learn it, and how did you go about it?

Yeah. So in Latin we have this saying, artificia docuit fames. It means ingenuity out of famine. And so oftentimes I do find, like, I learned Lua because I needed to configure my Neovim setup. Yeah. Like, you get better and better at picking up the parts of a framework that you need. Like, I wanted to build this timeline to visualize some of the greatest people ever, like Socrates, Jesus, Caravaggio, all of these. And so I learned a bit of SvelteKit. Yeah.
You know, just understanding the way that the components work, how to integrate TypeScript in there, how to at least architect it at a higher level, like what should be the different pieces, and then you can let the large language model fill in the rest. Because these days that's a privilege that we do have. And it makes learning... even if you don't have deep mastery, hey, I have a web app that exists on the World Wide Web. Nice.

So tell me about a time that you had to explain a technical or complex idea. I feel like that's something that you're really good at, explaining complex ideas, but to a non-technical audience or in a simple way. Of course. How did you make sure that the other person understood it?

Sure. So I like to teach. I've been a tutor since graduating high school, and oftentimes I do find myself in this situation, where some kid's parent is just paying me money to teach this kid. But this kid could be dumb as rocks, you know. No offense intended. Like, we all start somewhere. No one comes out of the womb knowing that pi is equal to 3.14159 26535 89793 2384... I'm just kidding. Anyways. And so learning is really predicated on this one concept called elaborative encoding. Have you guys heard of that? No. Okay. So it's basically where, and this is the hard part about teaching, you need to probe and understand what that person knows, and then you need to connect the thing that you're trying to teach them with the stuff that they know. Because otherwise it's not going to stick. Does that make sense? Yeah.

So for example, take the word nocturnal. A lot of us know what the word nocturnal means, right? Rafid, you know? Yeah, yeah. Do you know what the word diurnal means? No. That's a bit of a new word, right? Yeah. Nocturnal means you're awake during the night. But diurnal, it seems foreign, but when you connect it to nocturnal, it's like, ah, okay, so you're awake during the day. And so suddenly elaborative encoding occurs. And so I think that's one of the techniques I use. Because obviously it's not cut and dried. It's not elaborative encoding as the one solution. You should try kanji. I'm not sure if you've ever written any Chinese characters. But there's not a lot of connection you can do. You just got to sit there and you got to write it. It's the same thing with sometimes factorizing, doing binomial expansions when you're a kid. There's not a lot of elaborative encoding, you just got to sit there and do it.

But yeah, when it comes to explaining things to a non-technical client, I would really try to understand what they know about the problem and distill it in terms of analogies. Yeah, yeah. Analogies, and maybe extend the stuff that they do know about our technical field. But obviously sometimes that bridge can be literally the difference of an undergraduate degree, and you can't do that. So there will be... it's lossy, it's a lossy transfer of information. So for example, have you heard of AWS Bedrock? I've heard of AWS, but not Bedrock. Okay. What's a sort of technical concept? Is it like Databricks?

So okay, with modeling, right? Yeah. How would you explain, in a non-technical way, the train, validation and test data sets? Ah, yeah, good question. Think about it as if you're explaining it to someone like me, who wouldn't have the foggiest idea. Like, I know what modeling is at a very high level, but, yeah. Okay, yeah, I can do this. This is a good one. Okay. Yeah. Do you mind? Actually, I'm out of coffee.
Because that's the good coffee. I've got the bad one. Let me make this a bit harder: you can't use any props in this situation. You have to... but everyone's asking like... But they've got to have a problem. Like, maybe we're solving the credit card fraud detection. No, for example, they ask, oh, what's a training data set? What's a validation data set, and what's a test set? What's the difference between them? Why do we have these things? Okay, sure.

So, Amanda, imagine you have a data set, like a credit card fraud detection data set. So it's got lots of rows, let's say 200,000 of them, right? All of these are transactions, okay? Now, this whole thing, you could use this as your training set, right? But then your model wouldn't really have anything to test on. How would you test how good it is? You don't have any more data. Do you get that? So you've got all of these rows, and we basically used all of it to tweak the neural network. And so what we do is we train on this much, and then we test on this much. And that's how we know how good our model is. Yep. Cool.

Now, slight caveat. There's just one more division, okay? And that division is, when we make a model, sometimes we want to test lots of them, right? Because we've got a wide variety of really good models, right? We've got decision trees, we've got neural nets these days, we've got support vector machines. And what we do instead is we take this data set and we just take a little bit of fat off to adjust the parameters, because they're called hyperparameters. And so that's what we use this validation set for. Do you understand that? So, support vector machines here, neural nets there, but all of them have these special... like, maybe we have a regularization parameter. And so we can use this validation set to adjust that and get a really good model. And then we test that whole model on this testing set. Cool. So those are the three divisions. Yeah, nice. Right? Good. Thank you. Right. Look, I think... I hope that wasn't patronizing. No, no, no, no, no. Not at all. No, it was good. It was good.
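(A minimal sketch of the split just walked through, using scikit-learn: train to fit, validation to pick the SVM's regularization parameter C, and the test set touched exactly once at the end. The data and the grid are toy stand-ins.)

```python
# Train / validation / test, as in the whiteboard explanation above.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, random_state=0)  # toy stand-in data

# Carve off a held-out test set first, then split the rest into train/validation.
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest, test_size=0.25, random_state=0)

# Use the validation set to pick the regularization hyperparameter C.
best_C, best_score = None, -1.0
for C in (0.01, 0.1, 1, 10, 100):
    score = SVC(C=C).fit(X_train, y_train).score(X_val, y_val)
    if score > best_score:
        best_C, best_score = C, score

# Only the final, chosen model ever sees the test set.
final = SVC(C=best_C).fit(X_train, y_train)
print(f"validation-chosen C={best_C}, test accuracy={final.score(X_test, y_test):.3f}")
```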
So have you had a time where you challenged an existing process or suggested a more innovative idea on how to do something? And how did you approach it, and what happened in the end?

Yeah. So I actually find this to be the case oftentimes on the field. Because I'm pretty quick, so I have a slightly different playing style, and sometimes I clash heads, you know, with the rest of the team, who want to play at a bit of a different tempo and things like that. And you know what? That's okay. It's a team sport, right? We're all just trying to harmonize. And if it's 6v1, then hey, it's 6v1. You know, AJ's got to adapt. I go by AJ on the field. Yeah. And then in team situations as well, because we've done a few group projects at university. Usually my ideas do stick, but sometimes they don't, because of the activation energy. I'm somebody that uses a computer with a split keyboard. I type in Dvorak at like 120 words per minute, and I have terminals just spawning everywhere. So that kind of a workflow is not something that I can impose on other people. And so it's like, but you know, the first two years of your undergrad, you try. You try to convince everyone this is the best thing in the world. But you can't really impose that. And so you need to either meet people in the middle, or meet them all the way, or you need to compromise as well. At least in my experience. Yeah.

So it sounds like... well, what are the specific steps you take to maybe challenge that existing process? What are some soft ways that you can influence somebody? For example, in a client meeting, if they want to go with one sort of decision, but you strongly feel about the other decision, right? How would you kind of softly guide them to sort of...

You know, Socrates did this back in Athens and he got sentenced to death. That doesn't happen anymore in a workplace. I can confirm. No, I'm just kidding. But so what Socrates used to do, right, is he would go to a noble person and ask them, what is justice? And they would give him an example of what they thought justice was, and he would just find a flaw in their reasoning, and then he would kind of point it out. I think I'd probably try a similar approach in that setting. It's like, hey, okay, let's imagine we deployed this. Let's deploy this, and then, okay, cool. What if we do a thousand transactions? Okay, what if you do 10,000? It's fine, right? What if you do a million? You think it'll work? Because say, for example, I'm suggesting we shift to a microservices architecture as opposed to a monolith. And then it's like, what about concurrency? What if two people access it at the same time? What happens? And I think slowly, of their own volition, they'll start to see that their idea is breaking down. Because if I feel strongly about a particular thing, then there's got to be a reason why. There's got to be some kind of architectural thing that I've seen as a professional that maybe the client has overlooked. Because even when you're tutoring kids, you can't force them to study. You've just got to kind of trick them into thinking that studying is the right thing to do, and then they do it. So as long as the client's like, oh yeah, they're right, we should use this other approach because I agree. As opposed to, again, just jamming it down the client's throat: do this because I said so, and it's right because I know better than you.

No, that's a really good point. I think even if you have a certain impression of the client, you obviously can't say, you're completely wrong. So I think an evidence-based approach, highlighting the pros and cons of the path they're taking, and then doing that kind of scenario testing, scenario analysis, on it is something that really helps validate your position. But ultimately, that's all you can do. There are limits to the level of influence you have. Ultimately, if they want to go with another pathway, you do need to support them through that.

So obviously, you're quite knowledgeable. That's the impression, and that's your branding coming through quite strongly. What's an example of how you keep up to date with trends in the space that you're passionate about, like software, data, AI, and how have you applied this to your learning?

You know, I want to say something controversial: YouTube. Okay. YouTube. Yeah. No, it's like, the game's changed. Learning isn't the way it used to be. I mean, I read a lot, so I will find out about a paper on YouTube, and then I'll go read the paper myself. But yeah, YouTube does it. Because if somebody bothered to publish a video, right, that took time.
Must have been a pretty good idea. Well, not always, but at times. And then obviously, you know, I just went to university, so I still stay affiliated with the university. So at the very top over there, that's training on UNSW's Katana cluster. Oh, really? Yeah. I've done a lot of training on NVIDIA A100s and H200 chips. So I know how to parallelize machine learning models to really accomplish non-trivial tasks. Anyway, so I'm bragging. Sorry. What are we talking about? How you keep up to date. So YouTube helps.

And so what's an example of something that you've learned through that platform that you're applying to your learning? Well, the AlphaGo paper. Yeah. That one's kind of old, but it's really important. And yeah, the Transformer paper. Yeah. UMAP, uniform manifold approximation and projection. Adam, the Adam optimizer. Yeah. A lot of papers. A lot of architectures as well. You know, like Mask R-CNNs.

And so all of these things, how is it kind of... in non-technical terms, what are the things you picked up where you were like, okay, this is cool, let me try and apply this? And what was the outcome of that?

Yeah, yeah. So one of the better applications, I think, was the U-Net paper, which is an encoder-decoder. Yeah, so, tree segmentation. I think you guys might like this one. So we actually implemented, as a group at university, an encoder-decoder architecture which uses convolution to downsample and upsample through basically satellite images of trees. And we needed to identify the masks that are the dead trees. And that's a very real application of just U-Net, which is an architecture that I learned basically through YouTube and the original paper. Yeah, so this one for sure. Cool, cool. Because it explains, like, even if you don't understand one concept, you can just watch another video on convolution. Yeah. And then it'll explain, okay, this is how the kernels work, this is what the calculations are. And then you write your little flashcards. Yeah, yeah, yeah. Definitely. I think it's supplementing your learning through many different sources. Yeah, you've got Claude open in one tab, you've got Google in the other. Oh, absolutely. Digesting volumes of information. It just speeds things up super quickly.
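(A pared-down sketch of the encoder-decoder idea described above: convolutions downsample, a transposed convolution upsamples, and a skip connection carries fine detail across. Assumes PyTorch; a real U-Net, like the tree-segmentation one, stacks several such levels.)

```python
# Toy U-Net-style encoder-decoder for per-pixel segmentation masks.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self, in_ch=3, out_ch=1):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU())
        self.down = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)   # upsample back
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(16, out_ch, 1)                # per-pixel mask logits

    def forward(self, x):
        s = self.enc(x)                  # full-resolution features
        b = self.down(s)                 # downsampled "bottleneck"
        u = self.up(b)                   # upsampled back to full resolution
        u = torch.cat([u, s], dim=1)     # skip connection preserves detail
        return self.head(self.dec(u))    # e.g. dead-tree mask logits

# A fake satellite tile: (batch, channels, H, W) in -> (batch, 1, H, W) out.
logits = TinyUNet()(torch.randn(1, 3, 64, 64))
print(logits.shape)  # torch.Size([1, 1, 64, 64])
```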
Okay. Did we want to ask a few questions? Yeah, yeah. Happy to jump into that. Yeah, yeah. So I had a look at your resume. Quite impressive. Thanks, man. And one thing that stood out was BookBot. So, a chatbot for public domain books using embeddings and PGVector. Can you walk me through the end-to-end design, from a new book arriving to a user asking a question and getting an answer?

Yeah, sure. So the BookBot was... do you know Andrej Karpathy? Yeah, yeah. Okay, good. So Andrej Karpathy was one day... I'm glad you know Andrej Karpathy. Anyways, so I'll be honest, I vibe-coded a lot of this. Yeah. And the reason for that was because Andrej Karpathy, during one of his videos, said that nothing like this exists yet. And I just wanted to know if it was possible to build it. Okay, sure. Now, obviously, you can't just leave an LLM to build something itself. It does take a little bit of oversight. So I do know the design in broad strokes. It was built in TypeScript. Yeah. We used React for the stack. What else did we use? We used PGVector to store the books.

Now, the thing with the books is they've got a very interesting quality about them, in that the text is too long to put into an OpenAI API call. You can't just copy-paste the whole book in. Because it's charging me a lot of money as well. So we've got to do some kind of re-embedding. So if you consider the book as having text, basically there's a script that re-embeds the dimensionality of the book into a more... it uses some kind of principal component analysis. I told it to do the PCA. I don't know how the PCA was done. Right. Yeah. So it's just such a huge project. I wanted to get Stripe payments working in there as well. So it's not a thing that a solo engineer can actually do. But beyond that, yeah, it runs up on my Volta web server in Melbourne. And yeah, it's got authentication. I built a lot of the authentication myself, because I took a little bit of pride in the work. So I configured the OAuth tokens. Yeah, nice. And yeah, just storing the passwords carefully, working with the JWT tokens as well, and then configuring the endpoints. That was kind of tricky, because it's got Google authentication. Did you manage to sign in? Okay, I just had a quick look at it. Okay, yeah.

I got almost more distracted with the icons. The icons are quite beautiful. There's kind of a theme that goes along with all of my work. So, Amanda, have you seen this one? No. Does that look familiar? Okay, so that's from my website. Oh, yeah. And these are my business cards. Yeah, cool. Okay, cool.

Yeah, so you've got TypeScript, React, PGVector. Yeah. Got a web service and authentication that kind of masks it. I would like to rebuild it, though. Yeah. So I bought this book recently. It's actually about doing this from scratch. So I plan to, within the next 10 to 12 months, rewrite the whole thing. Yeah, yeah. And get it working when I understand all the internals. Because, again, this is kind of a minimum viable product. I'm actually not even sure. I need to remove that from there. Yeah.

So what kind of helped inform the user experience and the sort of journey? Yeah, I kind of had high school students in mind, you know, like public domain books, like 1984. Yeah. You just want a couple of quotes. Yeah, yeah. You want to ask, like, you know, did this person die? Like, we've got markdown formatting going. Yeah, nice. Yeah, that kind of stuff. But it's really buggy. It was kind of my first experience with TypeScript as well, which is probably not the best way to learn TypeScript, with such a monolithic project. But, you know, you do pick up some stuff when your IDE is just throwing so many errors at you. Because you're trying to write JavaScript-style code in a type-safe environment. Yeah, yeah, yeah. Cool.
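(The retrieval step above is described loosely; the common pattern for this kind of book chatbot, which is not necessarily what BookBot actually does, is chunking plus embeddings stored in pgvector, rather than PCA. A minimal sketch assuming the openai and psycopg packages and a pre-created chunks(content text, embedding vector) table; every name here is illustrative.)

```python
# Chunk a book, embed each chunk, store vectors in pgvector, retrieve by
# similarity. A generic sketch, not BookBot's actual pipeline.
from openai import OpenAI
import psycopg

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    # Render each vector as a '[x1,x2,...]' literal that pgvector accepts.
    return ["[" + ",".join(map(str, d.embedding)) + "]" for d in resp.data]

def chunk(text, size=1000, overlap=200):
    # Overlapping character windows so a quote isn't cut in half at a boundary.
    return [text[i:i + size] for i in range(0, len(text), size - overlap)]

pieces = chunk(open("book.txt").read())        # a hypothetical new book arriving
with psycopg.connect("dbname=bookbot") as conn:
    for piece, vec in zip(pieces, embed(pieces)):
        conn.execute(
            "INSERT INTO chunks (content, embedding) VALUES (%s, %s::vector)",
            (piece, vec),
        )
    # A user question: embed it, pull the nearest chunks, feed them to the LLM.
    qvec = embed(["Did this character die?"])[0]
    nearest = conn.execute(
        "SELECT content FROM chunks ORDER BY embedding <=> %s::vector LIMIT 4",
        (qvec,),
    ).fetchall()
```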
The other thing that kind of stood out to me was the kidney tumor segmentation project. So how did you design, train, and evaluate your model? And what were the key challenges you had to solve through that process?

Yeah, that was... okay, so this probably does relate to the question Amanda asked previously, about me trying to get a whole team to lean over into my direction. And I really had to... to show that my approach was the best way. So I used something called an nnU-Net for the kidney segmentation project. And that stands for "no new U-Net". Because the thing is, we, as university students, are not experts on kidneys or kidney tumors.

When we first started looking at the problem, we had a very large data set, like 300 gigabytes. And nobody in the whole cohort was like, oh yeah, let's do this. But like I said, I'm a competitive guy. I said, we can do this. I knew about Katana, so I downloaded the whole data set onto Katana, and we split it up into training, validation, and testing. And, well, I tried to use U-Net. I really did. I tried to do it myself. But the thing is, there are so many different hyperparameters. It's a CT scan, a computed tomography scan. It's 3D. Like, how do you know how to adjust it? What kind of preprocessing steps would you use? It's really hard to understand that. And so there was this guy who wrote his doctoral thesis on nnU-Net. And I read chunks of his thesis, and I looked at the Git project. And basically the whole medical semantic segmentation, I want to say cohort, but you know what I mean, all of those people, they kind of really grew around nnU-Net. And I was skeptical at first, because I'm like, well, I'm not actually doing anything novel here. There's no pride in this. But actually getting it to work was kind of hard. And eventually I thugged through the SSH Katana portal and making the Jupyter notebooks. And nnU-Net automatically learns the hyperparameters. And configuring the pip list so there are no conflicts, accelerating it with PyTorch, that turned out to be a pretty big problem. And anyway, at the end, I was really worried. But we did the final presentation, and one of the markers at the back was like, yeah, my colleagues couldn't implement this. Yeah, we got full marks. We obtained a very competitive score, exhausting UNSW's GPU compute. But beyond that, I wasn't satisfied. I wasn't satisfied with the score that I got in the end. Because, come on, you run a kidney segmentation thing and it's right with, I don't know, say 70% accuracy? That's not good enough. Plus, I've got access to state-of-the-art hardware at the moment. I should be able to do better.

So what were your evaluation metrics, your validation strategy? How did you avoid overfitting? Oh yeah, more data. More data, that's a pretty big one. I can tell you the usual answers, like batch normalization, regularization. I can't remember exactly. Maybe there was a JSON file somewhere where I tweaked the amount of dropout. But naturally: more data, dropout, regularization. What else helps with overfitting? I think investigating the bias-variance decomposition helps with overfitting as well. Although that might be a little bit theoretical. Yeah, fair enough. I really do like the bias-variance decomposition. I think it's one of the most beautiful aspects of machine learning. It's so easy to understand as well. Do you reckon? Okay, well, good for you, bro. In the context that you can show the trade-off, and it's very palatable to people.
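(The evaluation metric never quite gets named above; the usual choice in kidney and tumor segmentation challenges is the Dice coefficient, so here is a purely illustrative NumPy sketch of it, not the project's actual evaluation code.)

```python
# Dice coefficient over binary masks: 2*|A & B| / (|A| + |B|), 1.0 = perfect overlap.
import numpy as np

def dice(pred, target, eps=1e-7):
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Toy 2x2 masks: one of two predicted pixels is correct -> Dice = 2/3.
print(dice(np.array([[1, 1], [0, 0]]), np.array([[1, 0], [0, 0]])))  # ~0.667
```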
Cool. And then, pick one of your web apps, for example the medical software web app or your math tools and Go API project. How did you take it from a prototype to something that was deployed, with an API and a frontend that real users can hit?

Yeah, so the thing is, the medical practice web app, that is like a full-stack web app. The math map is a static app, so it doesn't really have a huge backend beyond just a JSON file. Have you opened it up, by the way? Not yet. I think you'd like it. Because I got into grad school and I spent the whole time just thinking about all the different hierarchies of math. It's a nice visualization. Again, I didn't write a lot of it, but it's nice to use.

I actually did write the medical practice web app. So I made the backend in Flask. And we watched a bunch of YouTube videos to understand how to connect the Python Flask components to the React frontend. I think just using RESTful APIs, and I think there's a special way to use SQL queries as well. I forget what it's called, but do you know what I'm talking about? A special way to use SQL queries? Yeah, like where you're not writing the actual queries themselves in Python. I forget. But yeah, so, design architecture kind of... because there were two people on the backend, myself and someone else, and then there were three people on the frontend. And so it was just coming up with a kind of homogeneous API that both parties agreed would be the best way to use it. Yeah. Just passing over JSON. Yeah. Yeah, cool.

Cool. And so what about the operational elements of deploying an application? Like, how do you do logging, monitoring? How would you debug something if it broke?

Yeah, yeah, yeah. I would use Flask. Right. Yeah, I would actually use whatever Flask had to offer. Yeah. To debug what was going on. I think if I could just wrangle the standard out into a file, that would be good enough for my intents and purposes. Yeah. But this is not a project where we maintained it for the client after deploying it. Yeah. So there was less of the logging kind of stuff.

Yeah, yeah. I do a lot of logging on my own web server. Yeah. So the QR code on the back of that takes you to the website. Did you get a chance to look at the website? No, not yet. Okay. So that website is like a thousand Git commits or something. It's like my opus. Yeah, yeah, yeah. It's like my Wikipedia as well. So everything I know really ends up over there. All of this stuff is tangled up in the website. And so I do a lot of logging for that. You know, I'll check out my dmesg, I'll check out my systemctl stuff. Although those aren't web apps, but, you know, Nginx, I suppose. Yeah. My mail server, which runs in a Docker container. Yeah. So there's a lot of logging that I do in the Linux ecosystem, because that's what I regularly use. You know, I hand this out to people, I've got to check the logs, I've got to ban IP addresses that are just trying to log in for no reason. Yeah, yeah, yeah. Cool.

And so what kind of tests did you build into the system? Just coverage, man. Yeah. So we... that's probably not the best way to do things. Yeah. But it's like, so we built the functionality, and then, because of trying to build a certain functionality, you write the code, right? And then, well... I can't even remember what it is in Python. Like, PyTest? PyTest, yeah. PyTest. In Java, we used JUnit. Yeah. And then just trying to get, I think, 80% coverage, and then testing the main functionalities as well. Yeah. It's pretty good for regression testing. Yeah. In the sense that, actually, it is regression testing: when you add more functionality, you can just run the test suite. Yeah. I'm a strong advocate for testing. Oh yeah, yeah. I think it's something that's overlooked in software development. I'm a huge fan. But yeah, it's quite critical for any sort of solution. Cool.

I think there's also really cool stuff you can do within testing. Like, if you actually study the testing frameworks, there are really nice design patterns in there, like decorator patterns. Because I always thought of testing as kind of a second-class citizen. It's not. It really should be promoted to a first-class citizen, in the sense that you treat it with equal respect to your code base. Of course, yeah. Yeah. I think that's something we're trying to do, implement more unit tests, integration tests, all those kinds of things that help you understand and debug a lot quicker. Actually, just adding to the list of things I'd like to learn or get better at: testing. Yeah. Yeah, for sure. I'd like to get better at testing. Cool. Cool.
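(On the coverage and regression-testing discussion above: a minimal pytest sketch. The function under test is hypothetical; the point is that the suite re-runs cheaply every time new functionality lands, which is the regression-testing part.)

```python
# test_pricing.py: run with `pytest`; add pytest-cov and `pytest --cov`
# to get the coverage percentage mentioned above.
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

@pytest.mark.parametrize("price,percent,expected", [
    (100.0, 0, 100.0),    # no discount
    (100.0, 25, 75.0),    # typical case
    (19.99, 100, 0.0),    # boundary
])
def test_apply_discount(price, percent, expected):
    assert apply_discount(price, percent) == expected

def test_rejects_bad_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```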
I think those were all the questions we wanted to run through. Did you have any questions for us at all? I did, of course. Just maybe one or two for you, because you've been here for almost nine years. Yeah, yeah. Ah, yes. How would you compare the culture here to places you've previously worked at?

It's a very special culture, I think. It's a place where we've got so many industry experts and leaders. And, you know, in a lot of other places, it can be a bit scary to sometimes get advice from them or talk to them, whereas all the leaders here are just so willing to share their knowledge and to help guide, mentor and support junior colleagues, which is lovely. And in terms of just the people in general, everyone's just friendly, welcoming. People can jump into opportunities. If you've got a good idea, there's no reason why you can't give it a go and try to implement something at a business level or a project level, if you're willing to put the time in and consult with people. So it's quite nice. I've been at a couple of other places before, and here just feels a little bit more... it's just a safe space to really jump into. I saw someone come in and you'd be like, morning. Yeah, pretty much. It's good. It seems like a lovely place to work. Yeah, it is.

Rafid, what's a recent project that you found exciting or challenging? Yeah, good question. So one of the recent products we've actually gone live with is solving document ingestion. So within Punt, right? Can you please explain what Punt is? So I'll explain it in a way where somebody without an actuarial background would understand. When you want to assess the risk of a building, right, there are a lot of things that you need to understand about it. You need to understand factors like when it was built, right? What the building material is, where it's located. And not only that, you want to understand, oh, am I insuring things around the building as well? So that if there was a fire and it spread, how much would that affect my bottom line, right? So all these different moving parts, they need to be thought out, and they need to be essentially quantified, because ultimately it goes towards calculating a premium that you charge to the party that's being insured. So that whole art there, that's underwriting. And all of that... That's underwriting. Underwriting, yeah. So historically, it's been a very decentralized process.
You'd have one team that does the building surveys and evaluation, you'd have another team that does the pricing and all that kind of stuff, right? But we've built a system which unifies everything. And that's really powerful, because if somebody comes and says, oh, I want to insure my building, how long will it take for you to generate a quote? You want to be as fast as possible, right? You want to be able to give them a price and the documentation and everything associated with that as quickly as possible, so that they can... And at the same time, you want to make sure that what you're giving them is quite reliable, right? And so that increases your business, that increases your income, revenue, and all that kind of stuff. So that's what Punt does. It centralizes that whole underwriting process.

But obviously, there are pain points with this. One of the inefficiencies is document entry: reading in unstructured data and then pulling out information from those different file formats and all that kind of stuff. So what we've built using Gen AI, the Gemini API, is this document ingestion solution, where you just pass in quote slips, claims history, all these different types of documents, and it just gives you the main data points you need to feed into downstream processes, right? And so that was challenging, because obviously we had to think about the variety of different files that we were getting. We had to understand the business rules that were being applied to these documents by a human being and codify that. And we needed to, I guess, balance what is practical with AI versus over-promising and saying, this is magic, it'll do exactly what you want it to do, right? So finding that fine balance was quite important. But yeah, this was challenging in the sense of navigating that ecosystem, and also communicating the results back to the client, because they're quite non-technical and a lot of this is kind of a black box for them, and being like, this is the underlying behavior that's happening in the back end, and building trust with them through that process. Yeah.

So, just PDF files as input and output? So the input is a PDF file, the output is like a JSON file, and then that's mapped in our back-end systems to the main inputs we're expecting. Sure, sure.
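(A minimal sketch of the PDF-in, JSON-out extraction call described above, assuming the google-generativeai Python SDK. The prompt, field names, file name, and model name are all illustrative; Finity's actual pipeline and business rules are not shown here.)

```python
# Hypothetical document-ingestion call: upload a PDF, ask for structured JSON.
import google.generativeai as genai

genai.configure(api_key="...")                 # key elided
doc = genai.upload_file("claims_history.pdf")  # hypothetical input document

model = genai.GenerativeModel("gemini-1.5-pro")
response = model.generate_content(
    [doc, "Extract the insured name, building construction year, and total "
          "claimed amount as JSON with keys: insured, year_built, total_claims."],
    generation_config={"response_mime_type": "application/json"},
)
print(response.text)  # JSON string, ready to map into downstream systems
```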
Do you use much LaTeX? No, no. You know what it is, right? Yeah, yeah, yeah. I'm a big LaTeX fanatic. So every year, this is like a side note, every year on my birthday I release a set of birthday problems. So this was last year's. You can just click through it now. It's just mathematical problems that are collected over the year. So this one's the kernel trick. You know the support vector machine? I learned it last year and I was enthralled by it. Yeah. Wow, this is... So there's prize money at the end. Oh, wow. Usually my brother wins, so this year I've adjusted the rules. You have to at least pass. Because otherwise the money just... Because nobody does it, right? Yeah. It's just too hard. But it's okay. Over the years I've found more and more intelligent friends and they rob me of my money. But it's good. I think... yeah.

I was going to ask, what does progression look like for someone starting as an analyst? Yeah. It all depends on the role and what you really get involved in and everything. So we've got some analysts who go down the actuarial pathway. So they might take their actuarial exams and continue on with those for about three, four years sometimes, depending on when they've started as well. So you might start off early in your career as an analyst doing things like data analysis, updating models, doing a lot of the data cleansing, things like that. And then you might progress to checking other people's work. And then from there you might progress further to making selections and things like that, or providing advice or providing suggestions and stuff. So yeah, that's just a very, very basic way of explaining it. But it's a bit of a journey, and every year is always different. So you'll find you'll be on a project, and each year, or each time you work on it, your role will progress and change.

Because I noticed here that it said, we are looking for a curious and motivated... Are you just looking for one? In terms of this particular role? Yeah, I mean, that's just the listing. I'm not too sure how many we're recruiting at the moment. I think, yeah, it's very much a growing space. So we are sort of... yeah, it's really dependent on who we see, and, yeah, if we have multiple stellar candidates... Yeah, I'm sure you do, yeah. But if not, then we can go back to the drawing board and go from there. So yeah. Because it's just about finding the right fit. You know, you guys have a beautiful culture, and it seems like you're working on real problems, and you need the right people to help solve those problems. Yeah, that's right. Awesome. Cool.

How did we go for time? Well, we're at 10 minutes. Not bad. Not bad. Perfect. Okay. Well, look, thank you so much for coming in and sharing all of that as well. Sorry, I've been cooped up at home. No, no. I've been unemployed. Just, like, going slowly insane. No, fair enough. Yeah. It happens to the best of us, really. Yeah, especially in this market. Yeah. So I guess in terms of our next steps, we are going through a few other interviews over the next week or so. So yeah, I should hopefully be able to get back to you by the end of the week with next steps or what we might do. Yeah. Excellent.

Also, just one last question. Would you say that I'd be working with, like, TypeScript or SQL or Python mainly? Mainly Python. Python, SQL. It really depends. There are a couple of active projects, but if your skill set is more aligned to front-end development, then... yeah, I don't think we use React, we use Angular for that. I only really use front-end libraries because I want to see something. Yeah. So I wouldn't consider myself good at it. It's just that the large language model makes it so easy these days to get something that you can see. But the bug fixing and the actual algorithm design, that's probably what I'm better at. Yeah, cool, cool. So yeah, to answer your question, mainly Python. Cool, cool. Excellent. Awesome. Great.

Cool. It was great meeting you. Yeah, good to meet you. Do you want me to just turn this one off? Oh, that's fine. Is that okay? Yeah. I can do that when I come back in. Excellent. Good, good. Cool. I've got to go to another meeting. Yes. Nice meeting you. Nice meeting you. Is this all starting next year? Yeah, you wouldn't be starting until then anyway. No, good, good, good. I'm glad. I'm glad. Thanks for your time. I will be in touch a bit later in the week. Have a good week. Alrighty, fine.

Car D. Level 10. Going down. Car D. Coming down as well for my next one.
I've got an event today. A couple of people are joining online, so prepping for leftover food. Lovely. Thanks, Lado. Car D. Level 5. Car D.