AI and Workplace Culture 1:2

Episode 33 November 04, 2025 00:52:11
Cultures From Hell

Hosted By

Paulina von Mirbach-Benz & Lars Nielsen

Show Notes

In this episode of Cultures from Hell, Lars Nielsen and Paulina engage with Belal Gouda, a senior product manager at an AI-first company, to explore the impact of AI on workplace culture. They discuss the balance between leveraging AI for productivity and maintaining human creativity, the pressures employees face in an AI-driven environment, and the importance of leadership in navigating these changes. The conversation highlights the potential pitfalls of AI adoption, including burnout and disengagement, while also emphasizing the need for a supportive and human-centric approach to technology in the workplace.

 

Culture Code Foundation https://www.culturecodefoundation.com/

Paulina on LinkedIn https://www.linkedin.com/in/ccf-paulina-von-mirbach-benz/

Paulina on Instagram https://www.instagram.com/sceptical_paulina/ 

Lars on LinkedIn https://www.linkedin.com/in/larsnielsenorg/

Lars on Instagram https://www.instagram.com/larsnielsen_cph/

Belal on LinkedIn https://www.linkedin.com/in/bgooda/

 

Takeaways

AI can enhance productivity but may also create pressure.

Employees may feel less valued in an AI-first culture.

Leadership must set clear expectations for AI use.

AI cannot replace human creativity.

Burnout is often a management issue, not just an individual one.

High performance should focus on quality, not just output.

Training is essential for effective AI integration.

Communication among team members is crucial in an AI environment.

AI should be a tool for empowerment, not a source of fear.

Maintaining human connections is vital for workplace culture.

 

Chapters

00:00 Introduction to AI and Workplace Culture

01:36 Belal's Journey in AI First Company

04:41 Daily Life in an AI First Environment

10:20 Pressure and Expectations in AI Adoption

12:19 Humanity vs. Machine: The Cultural Shift

14:46 The Shift from Productivity to Output

20:58 Creativity and Motivation in AI Workplaces

24:59 Leadership's Role in AI Integration

27:33 Employee Turnover and AI Impact

28:38 Burnout: An AI or Management Problem?

28:46 Understanding Burnout in the Age of AI

31:50 The Management Problem: High Performance vs. High Pressure

36:15 Neuroscience of Pressure: The Impact on Creativity

38:32 Warning Signs of a Toxic Work Environment

42:02 Creating a Culture of Trust and Clarity

47:40 Leveraging AI Effectively: Human Connection Matters


Episode Transcript

Belal Gouda (00:01.277) Thank...

Lars Nielsen (00:01.282) Hello everybody and welcome back to Cultures from Hell. This is the podcast where we explore the messy, fascinating and sometimes frightening sides of modern workplace culture. And today we're diving into a topic that's shaping every team and every business: AI and workplace culture. What happens when companies push employees to use AI for everything? It could happen in my company. Does it make work better, faster, smarter, or just more pressured and soulless? And to unpack this, I'm joined by two people who've seen both sides of this shift: Paulina, co-host and co-founder of the Culture Code Foundation. And today we are welcoming Belal Gouda, a senior product manager with first-hand experience inside an AI-first company culture. Together, we'll explore how AI changes not just what we do at work, but how it feels to do it. Belal, welcome to the podcast.

Belal Gouda (01:15.827) Thank you, Lars.

Paulina (01:17.377) Welcome from me as well.

Lars Nielsen (01:17.568) And let's start off with your story. Can you tell us a bit about yourself and what makes you our expert on this topic today?

Belal Gouda (01:29.747) Of course. So I'm Belal Gouda. I'm a senior product manager at a software company. It's actually also an AI company. And before becoming a product manager, I had a long, long career in a technical role as well. So I have technical experience myself, which makes me very fascinated by the capabilities of AI and the prospect of using AI to build things and do things. And I use AI myself every day to make a lot of things, to make myself more productive, and so on. And I think it's actually more of a positive thing than a negative thing. However, what makes me knowledgeable about the topic of today is that I also happen to work in a company that recently decided to become an AI-first company. And that means that they're basically trying to automate everything possible that can be automated with AI. So the rule is: if AI can already do this well, then use AI and don't do it yourself anymore. This applies to non-technical tasks, like preparing a slide deck. It also applies to technical tasks, like writing code, which AI is supposed to be writing very well now. So again, hearing and receiving this news at the company was good news, right? Management started to encourage people to use AI, which I think is a good thing. But the problem is that this started to become a lot of pressure: now you must use AI. Which is, okay, now we're starting to become a little bit nervous. Some people were resistant to using AI, so they started using a lot of clever techniques, like they created a hackathon where people would have to compete only by using AI to build things, which was a fun way to introduce people to what AI can do. And then eventually everyone started adopting AI. But the problem was that then the expectations of how much output they can produce with AI became very different. And so suddenly people were expected to...

Paulina (03:46.602) and

Belal Gouda (03:50.107) multiply their output by 10. Everything they did in a couple of days, they were told they should have used AI to do it in a couple of hours, which was not very nice.

Lars Nielsen (04:04.896) and

Paulina (04:05.314) Mm.

Lars Nielsen (04:07.672) Paulina, have anything to say?

Paulina (04:10.203) I know, I was just pondering about this, because I will get to this. It's something that I've seen a lot, but...
Lars Nielsen (04:18.602) Okay, to take us back to your company, you're saying you have an AI-first strategy, and you mentioned a couple of things about what happens when you have this AI-first strategy and people get pushed to use it. But what does that look like in a day-to-day scenario when you work in an AI-first company?

Belal Gouda (04:42.321) How that works is that we're always asking the question: how can I do this with AI? How can I automate this with AI? And it's perfect for routine tasks, things that were time consuming, that you had to do, that anyone could do, even people who don't have your exact experience, but it was your responsibility to do it, and you had to do it anyway. Now AI can automate that and take that off your shoulders. And it's supposed to free your time to focus on higher value things. But when you're expected to do everything with AI, including the things that you have many years of experience in, then it becomes a little bit tricky, because now you're starting to feel that you're not really adding value yourself, that you became the agent, not the AI. You became the agent, and you're just directing the AI, telling it what to do. So for me, every day I use AI to plan my day. I use AI to automate writing, like putting my own thoughts into documents of a certain format that I need to share with my team, with customers, with partners and so on. All of this is very useful. But when we get to technical tasks, the people building our product, they also could use AI to automate code writing, but they need to stay in control, right? They need to be the decision makers. And that's usually how it works.

Paulina (06:28.8) And I can imagine that they also want to feel valued for their creativity and for the way they do their job, and not feel like a robot feeding a machine.

Belal Gouda (06:44.635) Exactly.

Lars Nielsen (06:44.79) I think that's actually a very good point, Paulina. From what you're telling us, Belal, and I would imagine that this goes for most AI-first companies, you've kind of taken the credit for doing great work away from the person and given it to the AI, right? And I would say, Paulina, you know me, you know I use AI for almost everything in my life and I run an AI company.

Paulina (07:01.282) Mm.

Lars Nielsen (07:15.724) But I would be sad if, you know... Okay, short story. I had a delivery yesterday to a client where we developed this platform for them. Everything was vibe coded. Everything was done with AI. The client knows this. He knows that we use AI for everything, but the client still said, I love what you guys did, I love how easy it is to use and how beautifully it was designed, blah, blah, instead of just saying, you know, AI did a good job. That would have been really sad, because I still spent two weeks doing it. Yeah, okay.

Belal Gouda (07:55.155) So I have a question about that, Lars. Even before you got praised for your work and the effort you put in, right after you finished the implementation, before you delivered, were you proud of your work? Did you see yourself in your work? Did you have the feeling that this is something that I built, that no one else could have used the same vibe coding tools to build exactly that?

Lars Nielsen (07:58.371) Yeah.

Lars Nielsen (08:23.148) Yeah, I would say I'm extremely proud of what I did. I'm proud for two reasons. One is I am not a coder.
So I'm so proud that I'm able to do stuff like this today without having any coding background. The second thing is that when I deliver something and people are satisfied with what I deliver, I can easily hear that what they notice is how much thought went into doing that product, right? You know, you have the buttons in the right places, or, okay, you took into consideration that when we do this, this has to happen, or this has to be saved, or you have to have this or that in the product, right? And that comes from years of experience, knowing that you have to put this into a product. And that's what I get praised for.

Belal Gouda (09:18.694) Exactly.

Paulina (09:22.562) That's exactly the point. I mean, I use AI on a daily basis as well, but definitely on a much lower scale than the two of you do. And still I feel like it's my work product, because I put the thought into the prompt that then generates the results. And then I chat with the AI to improve the results. And then I take the result that the AI produces and I still rework it to make it fully my own and to adjust it to the customer. I definitely see it as my work product, not AI's.

Lars Nielsen (09:57.098) Exactly. Okay, Belal, just getting back on track here. You talked a little bit about how the team got pushed to use AI in every possible way. And again, for me, big fan of AI, when you say it, I kind of see it as maybe a positive thing, but I can also see the pressure it puts on the team. So what kind of pressure did that put on the team?

Belal Gouda (10:32.197) It put a lot of pressure on the team. There is a difference between encouraging people to use AI, teaching them how to use AI in a productive way, similar to how you were just describing your use of AI, something that you should still be proud of, something that still delivers your own experience, but in a more sophisticated way, in a faster way, and just telling people to figure it out, find a way to use AI just for the sake of being much, much faster than what they are now. Because now there's an implicit negative message. You're telling people: you're slow. Given that we have AI now, you as humans became too slow for us. We can't just wait for you to do the normal things that we hired you to do. So these people were hired, most of them, before all of these things with AI were possible. And now you're telling them: you're not good enough anymore. You need to use AI. AI is going to do your work much faster. So I think that message is very sensitive and it needs to be carefully said and explained. And I think companies, especially those who choose to be AI first, but really all companies in the world now in the age of AI, should have some sort of program to watch how their cultures are being affected by AI, how their employees are feeling about it, and so on. But especially if you are a company that took the decision to become an AI-first company, you need to have some sort of a handbook that tells people how to deal with this and sets the right expectations about it.

Paulina (12:21.526) Mm-hmm.

Lars Nielsen (12:24.6) And it's such an important topic to talk about, because again, I am all in on AI. I'm deep down that rabbit hole and I just see positive things. But this kind of just opens my mind to a lot of new potential pitfalls. Well, what are your thoughts on it, Paulina? Just to get your words on it.

Paulina (12:29.772) Yes.
Paulina (12:33.844) You...

Paulina (12:48.556) Well, for me, it fits so well into last week's episode, where we talked about digital surveillance and where I had my slightly emotional rant about companies treating humans as machines. And what Belal is telling us right now is the same trend on a different level. And I just find it so difficult to treat human beings as machines, or to expect the same things from them that you would from machines, because it is just not the same thing. And we need to keep this humanity in work relations. We need to be able to see the difference and to treat humans differently than we treat a tool. And if we don't do this, I just see such major problems for society, for mental health, for connection among people. And I see the possibility of this undermining so, so many things that we've built over thousands of years as a human race. That gives me the chills, honestly, because I love AI. I think it's a powerful tool, but it's nothing else but a tool. And if we forget that, and if we start treating humans like machines, I simply cannot foresee a beautiful future with that in mind.

Lars Nielsen (14:40.864) And Belal, back to you. You talked about how at first AI made your work easier, right? You had time to think, collaborate and focus on quality. But then suddenly there was a shift where it became more about the output, right? Can you walk us through how that shift felt for you, and for other people as well, if you have that insight?

Belal Gouda (15:09.095) Yeah, yeah. There was this moment of realization of, what are we doing now? Because the purpose of becoming more productive by using AI is to save time, to put our years of experience into higher value deliverables and more impactful outcomes, so that we have time to think and reflect on what we're building. We have time to talk, right? Because one of the downsides of AI, even if not very related to the topic we're talking about: because everyone now can ask ChatGPT about anything, you'll notice in any company that the number of Slack messages that people used to send, like, does anyone know how to do this, or seeking help from their colleagues, dramatically went down. So there's less human communication and connection between people who work in the same company. Everyone is just watching their own screen, and if they need something, they talk to ChatGPT instead of the person sitting next to them, if they're in the office, or instead of Slacking someone else. So the level of human communication is already being reduced a lot, and there's almost nothing we can do about it right now. We can't tell people to stop using ChatGPT. But then even the task that you're focusing on and using AI to build, which was supposed to be done quickly so that you get time to think, now you're being told: oh, since you can do this quickly, we need a hundred of that. Why did you only do this yesterday? Because it was going to take all day to do it myself. AI helped me do it in half a day, so I spent the other half in a planning meeting, or in this brainstorming workshop, or doing research about this thing that we could build, or something like that. If you're told, we just care about delivering more things faster, and since you're using AI you're expected to deliver this much, now you don't have time to think. So, reflecting what Paulina was saying, you're...

Belal Gouda (17:32.443) losing your human value, because AI is not creative, right?
Humans are creative, at least until today. The AI is not creative; it's just like a really smart parrot, and smart in the sense that it can remember a ton of things. But yeah, so that becomes a problem, because then I'm just not thinking at all, not thinking in the tasks themselves, because I'm just giving the AI instructions quickly and just producing a lot of things without thinking about them. So it doesn't guarantee that you're building, speaking as a product manager now, it doesn't guarantee that you're building the right product at all. It actually almost guarantees that you're building the wrong product, because if you're not thinking about the impact that what you're building has on your product roadmap and your customers and your business, then you're just driving a very fast car in the wrong direction. You're not going to get to your destination. Actually, you're going to get to the wrong destination faster, which is a very bad thing. So yeah, that becomes very problematic.

Paulina (18:38.946) You...

Paulina (18:44.428) And I would also say, I love your pictures with the car driving in the wrong direction and the smart parrot.

Belal Gouda (18:51.667) Thank...

Paulina (18:56.726) The quality of the AI output always depends on the input. And if you don't really have the time to think through what you're going to do and to really challenge the AI... Because we all know that AI hallucinates. We all know that it produces probabilities. It doesn't produce truth. So as humans, with our experience, we need to challenge the output constantly in order to ensure that it's actually of high value, right? And if you don't have the time to do so anymore, or if you're even actively discouraged from doing so, that also, just like you say, has the high risk of running in the wrong direction. And what I'm saying right now leaves out the entire human side of it completely, but even just from the productivity standpoint or the quality standpoint, this is problematic.

Belal Gouda (20:00.691) Exactly, exactly. And this happens sometimes without people even realizing it, because when you're pushed to produce more things with AI, you will be less willing to iterate over and over with the AI on every task, because you want to produce more tasks. Instead of iterating like five or six times, which is sometimes what you actually need to do to get the output that you should be producing, you just iterate twice and say, it's probably not gonna change much, I need to jump to the next task, because I'm expected to deliver this much.

Paulina (20:36.78) Mm-hmm. Yep.

Lars Nielsen (20:40.63) And Belal, I love when you said that AI, or artificial intelligence, is not creative. We are the creative ones, so we have to provide that to the AI for it to be creative. And when we have to mass produce, the creativity kind of starts to fade, right? And as an engineer, I would imagine that kind of affects how proud you are of the work you do if you just have to mass produce everything. Can you tell us, does that affect your motivation or your team's motivation when you have to mass produce and creativity has to take a backseat?

Belal Gouda (21:29.639) Yes, yes. You can see it in how people talk about their work. You can see it in their eyes and their faces. You can see that they used to be more proud of what they're doing, of what they're building.
And don't get me wrong, I sometimes would still see people proud of their work, even though they used AI, because for this specific task you could see the difference, you could see that no one else could have created this with AI or any other tool. So they're really happy about it, and they should be. But for many other tasks, you would see: yeah, yeah, we got this done. It didn't take any time. I just told Cursor to build it and it did. Here you go, it's working. What should I work on next? This is not what you want to hear from someone who completed a task that is supposed to be a great feature for our customers, if it was. So that's a really big difference.

Paulina (22:30.912) I mean, you guys are some of the best educated people in the world. And when I listen to you, it sounds like you guys feel like you're doing assembly line work, like in the automotive industry, right? Just doing the same motions over and over again and not making any use of all this education, all this experience that you've built over years and years. It makes me really sad to hear it.

Belal Gouda (23:06.013) Exactly. What is happening is very similar to what happened before. I'm also a little bit of a history geek, so reading about the Industrial Revolution, for example, where craftsmen who used to build an entire product that represented their art and their skill were forced to make a living in the new world and move to factories and work on an assembly line where they just kept adding one part, not really doing much. But that factory was producing way more than what all of those individuals combined used to produce. Of course, the psychological effect that this had on people back then, and a lot of the research that psychologists did back then on how society was changing and so on, I think if we go back and read it, it would help us in dealing with today, because it's very analogous to what is happening now. 100 percent agree. And also there's the scare factor that is unfortunately also keeping people in line, because you keep hearing that AI will replace all the jobs, all of these layoffs are happening because AI can now do everything. So you kind of feel, okay, I better keep my mouth shut and use AI as they tell me and produce as much as... Otherwise, AI will take my job, right? Which is, again, what happened back then in the Industrial Revolution. It's exactly the same in that, if I don't stand in this production line and do like everyone else, I'm going to starve. That was the sentiment back then. And now it's: I'm going to be jobless if I don't comply with these changes that rob me of using my own experience.

Lars Nielsen (25:04.342) And speaking of that, Belal, I want to find the right analogy here. Somebody has to steer this ship in the right direction, right? Because on one side you have to embrace AI, but on the other hand, you also have to have the human in the loop, as a lot of people say, right? So what does this mean for leadership, when AI becomes such a big part of your everyday life?

Belal Gouda (25:41.103) It means that you should treat it like any other company process and give it enough thought, not just jump on it, and set the right policies and culture for it.
Similar to, for example, if a company decides to become fully remote, or to become hybrid instead of office-based, they start asking a lot of questions about how to do it right. How do we enable people to be more productive at home? Do we need to support them with equipment? Do we need to set a different meeting policy for online meetings, or different workshop tools for online workshops? You start investing in your decision to get the most out of it, the most value in outcomes, not the most as in the number of outputs, because in the end you want your business to be successful. You want your product to be good. So similar to any kind of high-level decision that leadership usually makes, this needs a lot of thought as well. This is all what we need to do to actually get only the positives, hopefully only the positives, of AI instead of the negatives. We need to start asking questions like: what kind of tasks should people use AI for, and what kind of quality do we expect from these tasks? How do we make sure that people are satisfied? We need to set up a cadence of check-ins with people to see how they're feeling about using AI, whether they're feeling more productive. Do we need to adjust our metrics to really measure if we're truly being more productive with AI or not? All of these things, I think, must be set in place. There should be a culture document, or if you already have a culture deck, part of it needs to be updated about how to use AI. But this is probably the part I'm not gonna... You know how to do this very well, so...

Paulina (27:40.578) Hmm.

Lars Nielsen (27:50.67) And this might be, I would say, maybe a little bit of a provocative question. Do you know if the company is kind of losing some of its best people because of all of this AI and so on?

Belal Gouda (28:13.245) I think it's more likely that this is the case. Basically, I would say that recently the turnover has been higher than usual. I haven't talked to a specific person who said, I'm leaving because of this push of AI and so on. They say, we're leaving because we're not happy anymore, we found better opportunities, and so on. But you can see that there is a correlation between that turnover and how things are starting to take shape in how people are doing their daily work. But it could also be one factor among other factors for those who left, of course. It's never one factor that becomes the deciding one.

Lars Nielsen (28:55.406) Yeah, and speaking of that, I actually have a question for both of you, and I will let Paulina go first to answer this one. So Paulina, we have talked a lot about burnout, pressure and so on here on the show, right? And burnout and pressure in this context, is that an AI problem, or is it a management problem using AI as an excuse? A little bit of a leading question.

Paulina (29:19.874) I think I've said this before.

Lars Nielsen (29:30.613) Ahem.

Paulina (29:31.206) Usually burnout is not an individual problem; its root cause lies within the culture, or within how things are handled or how people are being led in a company. And I can imagine that the AI side of things exacerbates burnout or pressure because of the things that Belal mentioned, right? That you are afraid that AI might take your job if you don't comply with it. And this overall anxiety. I think very, very many people feel that overall anxiety at the moment, or this vague fear
of what the future holds, because we cannot really foresee what AI is going to do to our jobs, to our reality. And for me personally, hearing you, Belal, talking about how people feel less valued, how people get pushed out, how people really lose the human connection... I didn't know, for example, the statistic that you mentioned, that the internal Slack messages went down so dramatically, which can be a good thing too, right? But it could be a sign of people not collaborating anymore, not talking to each other anymore. And if all of that pushes us in this direction of being more robotic or more inhumane, then this definitely will amplify burnout, because human resilience comes from real, true connection to other people. And if our jobs become more and more disconnected and actually drive us to behave more like technology, then this will

Paulina (31:40.107) skyrocket mental health problems across the world. And we already have a major mental health crisis. So I think this is something that every single entrepreneur out there should be extremely aware of. Yes, you can make more money if you have fewer people and better margins at the end of the day, because you can produce all of that just using technology. But what happens to our society, what happens to our world, if we are all interchangeable, disconnected, burned out and lonely people? What is gonna happen to this world? So yeah, I would just sum it up, to come back from my rant.

Lars Nielsen (32:38.207) Hahaha

Paulina (32:39.518) Management problem, from my perspective.

Lars Nielsen (32:42.766) Belal, any thoughts on the same question?

Belal Gouda (32:47.03) 100%, I think 100% it's a management problem. AI is just a tool, like any tool. We need to use it wisely. I use it all the time. I don't want to sound like I'm against AI. Actually, as I said, I have a technical background. I'm an AI engineer. I understand how the technology works, and I love to use it to become more productive. I use it for personal things. I use it to be better at what I do at work. I just hate to see it being put to the wrong use and causing so many negative effects, especially when it affects people, as Paulina said, mentally. This is where I draw the line. I think something needs to be done about this.

Lars Nielsen (33:35.98) And Paulina, you said something powerful in one of our last episodes, that companies today confuse high performance with high pressure. Can you explain what that means in the context of today's episode?

Paulina (33:53.643) Yes, absolutely. So high performance is always about sustainable excellence, which should be driven by clarity, by mastery, by creativity and by people's motivation. And high pressure, on the other hand, usually is some form of panic from management. And what Belal told us is something that I've seen in multiple companies. A company will introduce AI and then expect everyone to suddenly become productivity superheroes. And then the companies mistake frantic output for effectiveness. And that is, if I understood you correctly, Belal, exactly what you were saying, right? The only KPI is: did you do it faster? Or, phrased in an even more micromanaging way: why didn't you do it faster? Right? So it's not about whether it's better or whether it's even necessary. It is just about doing more and more and more and more, no matter if that brings any actual value.
So, and I've said this before, I'm convinced that pressure might get you short-term compliance, but it definitely never builds long-term capability. And especially when I see what a lot of companies are doing... I mean, Belal, you told us about your company that they invested in training people, in really getting them in touch with AI and showing them the capabilities. But a lot of companies don't even do that. They just say, okay, we're AI first, you go figure it out all by yourself, right? And that just exacerbates that pressure, I would say.

Lars Nielsen (35:49.858) Can I just say one thing here again? One of the things that I work with in my company is we also go out and advise people on how to use AI in general; we don't just deliver products. And one of the things that we see so, so many times when we come out and ask, so, did you implement AI? Oh yes, we gave ChatGPT to every employee. And it's like, okay, so how do they use it? We don't know. We just gave them an account and asked them to start using AI. And I was like, okay. And honestly, I would say that's eight out of ten companies I meet that said they implemented AI. That's how it works. And like you say, Paulina, it creates so much more pressure, because they feel forced to sit and make up...

Paulina (36:25.634) Yeah, exactly.

Lars Nielsen (36:45.13) ...stuff, and I'm doing this in quotation marks for everybody just listening, make up stuff that they can use with AI, right? Because now they have to start using AI. And wow.

Paulina (36:56.726) Yeah, it can also have the opposite effect, right? If you have no idea how to actually utilize it, you will actually spend way more time on delivering any output or outcome, because you're just lost in the technology of it.

Lars Nielsen (37:11.458) Yes, 100%. And Paulina, from a neuroscience perspective, what actually happens when people work under this constant pressure or fear, even if it's meant to, again, quotation marks, motivate them?

Paulina (37:32.565) I hate this phrase, using fear or pressure to motivate people. It never motivates people. It just pressures them to comply. It just doesn't work this way. And if the brain is under threat, which pressure and fear constitute on a neurobiological level, then the brain does something really clever...

Lars Nielsen (37:40.055) What?

Paulina (38:02.056) ...and really inconvenient, because it shuts down the prefrontal cortex. And that is exactly the part that we need for creativity, reflection and decision-making. So while AI might actually speed up our tools, the insane pressure to work with AI might slow down our brain and thereby reduce the quality of output that we actually deliver. And then you end up with humans that are running prompts while on cortisol autopilot. That's not innovation. That's survival mode. And I think I speak about survival mode quite a lot in this podcast. If humans are in survival mode, they will never deliver their best performance. So companies that act like this are actually complicit; they are complicit in worse results than they intend to produce.

Lars Nielsen (39:11.232) And then something we often hear is that pressure creates diamonds. But at the same time, we can also kind of break people, right? So what are some warning signs that the leaders out there should be looking out for? And maybe after you answer this, Paulina, then Belal, you can also pitch in on this one.
Paulina (39:34.802) Let me just really clear up this picture of pressure creating diamonds, right? Yes, pressure does create diamonds, but it takes geological ages. It doesn't create diamonds in Q3, right? So stop using this phrase, everyone out there, because it's just bonkers. And when it comes to warning signs, I would say the usual warning signs that you see: when people are not engaged anymore, when they are not speaking up, if they are busy in back-to-back meetings, if they ship outputs fast but show no pride in them, exactly what Belal mentioned before. And also when energy is spent on avoiding blame rather than really creating value. I've been working with a company where people spent up to six, seven hours every week just to document what they had been doing, in order to be able to showcase that they were not to blame for anything. So obviously, if you spend six or seven hours every week on that, you're not spending that time on actually creating value. So, um, yeah, I would say those are my warning signs. Belal, do you have anything else?

Belal Gouda (41:05.489) Yeah, I would say another warning sign is when people start watching outputs instead of outcomes. So reflecting back on the pressure part: pressure is bad in any way or form, but it's even worse when you're pressuring people to be faster just for the sake of being faster, when you just want them to produce more, like, how many features did you ship this week? So I'm not saying that...

Paulina (41:25.397) Yes.

Belal Gouda (41:33.553) Even if you're pushing for outcomes, right, like you must improve that metric, you must increase customer retention this quarter, figure out a way to do it... if you put pressure on people to do the right thing, it's still pressure and still bad, and there are still better ways to achieve these good things. But it's even worse when you put pressure on people to do something that is not guaranteed to provide value. It's just an output. You don't know if it's going to be something good or bad for your product, if people are going to use this feature or not, if it's actually going to improve that metric that you care about or not. So that's, for me, an even bigger warning sign, because I think every culture that survives on pressure eventually falls into this trap, because people are unconsciously aware that outputs are a proxy metric for the outcomes, right? So they start focusing on these proxy metrics because they're too pressured, too afraid, and they need to see a result sooner. And the result that they can feel is how much they're producing, so they start caring more about that. So gradually, even if you're starting from a good place where you're well-defining your desired outcomes, you eventually fall into this bad place where you're caring more about outputs because of that pressure. So I think that's one of the biggest problems I see.

Paulina (43:06.316) That's a really good point.

Lars Nielsen (43:08.64) And Paulina, you've worked with teams through the Culture Code Foundation that perform brilliantly without pressure. So what's the secret sauce? What's the secret to that kind of motivation?

Paulina (43:25.396) I happen to know for a fact that all three of us used to work for companies that could perform without pressure. So I think you also have ideas on that.
From my perspective, what I see in teams like that is absolute clarity on what good work looks like. That goes to the point that you made, Belal, about expectation setting. And then it comes down, like...

Lars Nielsen (43:33.0) Oh yeah.

Paulina (43:53.461) ...all the time in this podcast, it can come down to trust. And in this case, it's the trust that you will be backed and not blamed. And last but not least, really, really important is constructive conflict and the permission to challenge whatever you're doing and also what others are doing, right? So you challenge across the board, and because you have the trust part in it, being challenged also doesn't feel threatening. It's just something like, you know, okay, we're all working together to make the solution even better, and that's why we are allowed and encouraged to challenge each other and ourselves. And to really translate this into an AI-enabled environment, I would say that should mean letting people customize their use of AI. Belal, I would be super happy to hear your point of view on this part. I think it also is important to give teams space to explore how to improve their output as a team, not just the speed of things, how fast they can do it. And then the third point here would be, as I mentioned before, to really invest in training and upskilling in regard to how to utilize AI best. And, because we also spoke about the importance of leadership when it comes to navigating the AI topic within a company, we need to constantly bring in this question: what's the human layer here that AI cannot see?

Belal Gouda (45:42.587) I couldn't agree more.

Lars Nielsen (45:42.708) And then...

Paulina (45:42.784) I love... sorry.

Paulina (45:49.836) Belal, go ahead, go ahead.

Belal Gouda (45:51.763) So I couldn't agree more, and I just want to add that on the individual level, there are things that people could do to make sure that they're using AI in a better way. Of course, always adding, in the prompts that they use, an instruction asking the AI to challenge them. But I would also add that the internet is full of ready-made prompts that people copy and paste. I would advise people to invest in their own prompts, making them theirs, making them as descriptive as possible to produce what they would like to produce themselves. I would also encourage people to add in their prompts the instruction telling the AI to keep asking them questions. This encourages the feedback loop between the AI and the human, and it makes the AI extract as much information as possible from you as a human before producing its outputs. Tell the AI specifically: do not produce the final output before you have every piece of information you need from me, and don't fill any gaps yourself. Because if you do this in one shot, without that iteration, that round of questions and answers, regardless of how descriptive your initial prompt is, there are going to be some gaps that the AI is going to assume it's its task to fill, and it will assume that you're going to be fine with it. So don't let the AI make that assumption; add this question-and-answer part before it produces the final output. Another thing that is more of a process thing: if you're working within a team and you're doing something using AI for work, make sure to get human feedback before sharing the final output. For engineers this is very common: PR reviews.
So you show people your code, knowing that a human is going to review that code and assume that you wrote it. Whether with AI or not, it's your responsibility, that's your quality. So the feeling that a human is going to review what I did is always a motivation to...

Paulina (47:53.25) you

Belal Gouda (48:17.619) ...produce something that represents you. But I would say even if you're producing a slide deck, have a colleague take a look: hey, I just finished the slide deck, I need your feedback before I use it in my next sales meeting, for example, or my next customer meeting. Adding that human factor counteracts the AI effect of less human communication, and it also reaffirms your confidence in what you're producing and makes you more proud of it and so on. So I think this is also a very, very important thing to do when using AI.

Paulina (48:56.214) Hmm.

Lars Nielsen (48:58.85) Very valid points there. I'm going to put in a marker here. Yes, because of time, I'm going to control the questions from now on. So we are going to go completely off script. We're going to use the questions in the script, but I'm going to dictate and you guys just follow along. Is that OK?

Paulina (49:05.248) Yeah, I was just about to say, shall we?

Paulina (49:18.21) I had another idea, because we still have so much to go through. How about we do this: do we make this into a two-part episode? And just if that is okay with you, Belal, because then we could just wrap it up for now and continue with the rest of the questions next week.

Lars Nielsen (49:21.091) Mm-hmm.

Lars Nielsen (49:27.596) Ooh, is that okay with you, Belal?

Lars Nielsen (49:39.351) Is that okay, Belal?

Belal Gouda (49:39.507) Okay, sure. Yeah, yeah, sure, sure. How many questions do we have left? Like how far are we in the script? Halfway. So we do have content to fill the next one, okay. Okay, okay, yeah, let's do that.

Lars Nielsen (49:45.233) We are only halfway.

Paulina (49:45.27) We're halfway through.

Lars Nielsen (49:49.92) Yes, we have lots of content.

Paulina (49:50.071) Yes.

Lars Nielsen (49:54.848) Okay, are we ready?

Paulina (49:57.804) Yes.

Lars Nielsen (50:00.482) Thanks for those insights, Belal. And Paulina, we have done this so many times. When we dive into an interesting topic, then time just runs past, and yeah. So we actually decided to do this as a two-part episode, because there's still so much to cover when it comes to AI and workplace culture, right?

Paulina (50:10.114) You

Paulina (50:25.612) Yes. I was already thinking that this might be too big a topic for one episode, and now with us getting into so many details here, and it's so interesting, and I think a lot of listeners out there are really interested in this topic as well, I really don't want to cut this short. So that's why the three of us thought, okay, let's make this a two-part series.

Lars Nielsen (50:55.244) And again, I love this, because again, I'm so deep down that rabbit hole of AI and I just, whoa, I love it and I praise it and so on. But everything you've been covering, Belal, and thank you for that, has kind of gotten me to start thinking in a different way, right? I was very surprised, as you were, Paulina, with the thing about the Slack messages, but it actually makes sense, because I ask ChatGPT for everything.

Paulina (51:02.807) Yeah

Lars Nielsen (51:23.392) Instead of asking everybody else, right?
Like, you know, how do I solve this, or how do I do this, or can you give a different perspective on this? I need to start asking people more questions instead of just asking ChatGPT.

Paulina (51:23.65) Mm.

Paulina (51:38.519) And especially, I mean, I do this all the time. So we work with AI, but we always have the other person look over our work again, to bring in a fresh set of eyes and a very human perspective, and not just the AI side of things. So I think it makes our work even more solid to do this, to really ask for feedback and for different perspectives on it.

Lars Nielsen (51:39.084) Okay.

Lars Nielsen (52:08.974) Cool. And Paulina, before we wrap up this part of AI and Workplace Culture, please enlighten our listeners on where to find you if they want to reach out or have any questions. And please, Belal, afterwards, just share with our listeners where they can reach you if they have any questions.

Belal Gouda (52:29.226) No

Paulina (52:31.148) So you can always find me on LinkedIn and on Instagram. We will put the social media handles in the show notes. And you can also find the Culture Code Foundation at culturecodefoundation.com, which is my company. And obviously, I have so much fun having a live guest here. Belal, thank you so, so much for sharing your story today and next week. And if anyone else out there wants to share a story with us, please feel free to DM either Lars or myself, and either be on the show yourself or tell us your story and we will share it in an anonymous way if you prefer that. It is really, really fun having experts on specific topics here and having people tell their own stories.

Lars Nielsen (53:22.606) Where can people find you if they want to reach out and ask you any questions around this topic?

Belal Gouda (53:28.883) Also LinkedIn. I'm not on any other social media platforms, but I'm very responsive on LinkedIn, so feel free to send me a message, especially if you have any comment about anything that I said in this episode. And finally, thank you, Paulina and Lars, for hosting me and for giving me the opportunity to speak about this topic, which is a very important topic for me personally to speak about. So I really appreciate this opportunity and I'm looking forward to our second half next week.

Lars Nielsen (53:59.276) And like you said, Paulina, we're going to put all the handles in the show notes. So if somebody wants to reach out to Belal on this topic, please hook up with him on LinkedIn and reach out. And to all our listeners out there, this has been another exciting episode of Cultures from Hell. I am so much looking forward to the next one next week. Have a great week ahead out there.

Paulina (53:59.468) So are we.

Paulina (54:27.308) Thank you, Lars, and thank you, Belal.

Belal Gouda (54:30.205) Thank...

Other Episodes

Episode 22

August 12, 2025 00:42:55

The Culture of Overwork: Burning Bright or Burning Out?

In this episode, Lars and Paulina delve into the toxic aspects of hustle culture, discussing its impact on mental health and productivity. They share...


Episode 25

September 02, 2025 00:38:29

Diversity Burnout: When DEI Becomes Performative

In this episode of Cultures from Hell, Lars Nielsen and Paulina discuss the hidden side of diversity, equity, and inclusion (DEI) work, focusing on...


Episode 12

May 20, 2025 00:38:47

Breaking the Cycle: Turning Failure into Opportunity

In this episode of Cultures from Hell, Lars Nielsen and Paulina von Mirbach-Benz discuss the critical importance of learning from failure within organizations. They...
