
AI isn’t out to take our jobs, but reliance on it will make us lazy: here are the skills leaders should equip their teams with in an AI-led future.

14/5/25

Will AI take my job?

This is a question at the forefront of many minds across multiple industries as AI’s capabilities grow. It is nothing new to say that AI has been transformative for many businesses, supporting many day-to-day tasks and freeing up bandwidth for more creative and strategic endeavours. Indeed, Microsoft and LinkedIn’s survey of 31,000 people worldwide found that 84% of respondents said AI allowed them to be more creative in their roles.

While AI is a great support, it’s no silver bullet, so for now, most of us can rest assured that AI isn’t here to take our jobs. However, that doesn’t mean we can or should take our foot off the gas when it comes to our understanding, management and analysis of such tools and their outputs. Doing so would put us at risk of relying on AI outputs as fact rather than leading with critical thinking – a skill that is essential to succeed in an AI-led future.

Within this blog, we’ll explore the skills that leaders should be promoting in their teams to help them succeed in our ‘new (technological) normal’, but first, let’s look at why over-reliance on AI may lead to issues.

Why is AI over-reliance risky?

According to a Salesforce study of 4,000 people across various countries, 52% of Gen Z say that they actively trust GenAI to help them make informed decisions. As humans, we have a tendency to trust outputs from automated decision-makers like AI, even when there is contradictory information suggesting the results may not be entirely accurate. This is known as automation bias.

And unfortunately, technology does sometimes get things wrong. So, while no one is debating that AI is a transformative technology, we shouldn’t simply take its insights at face value.

With critical evaluation and oversight discarded, teams can fall into the trap of a general over-reliance on AI outputs, which can lead to complacency and a loss of essential skills such as intuition, critical thinking and problem-solving.

With this in mind, valuable checks on end results, such as data validation, should not be thrown out of the window. Instead, leaders should ensure team members continue to make judgement calls when adopting AI tools in their day-to-day. Teams should be asking themselves whether the outputs they are receiving align with past results and their own contextual knowledge of the industry.

Many tasks, such as risk management, still require a level of human processing and thinking even when they have been completed by AI. In this instance, oversight helps prevent bias and keeps ethical considerations front of mind. By practising skills such as questioning, flexibility and critical thinking, teams can combat over-reliance and properly assess the outputs that their tools are producing.

Unchecked results aren’t the only issue over-reliance on AI presents. A recent study published in the Journal of Management found that over-reliance actually risks harming our mental health, too. The Financial Times reports that ‘the use of artificial intelligence has the capacity to boost productivity but the powerful technology also puts workers’ mental health at risk as it erodes traditional ways of collaborating.’

A similar study from the Journal of Applied Psychology shows links between regular AI use and loneliness.

The skills that leaders need to encourage in their teams

To combat these potential risks, leaders should be finding ways for their employees to experiment with and use AI for productivity gains and streamlining routine tasks, whilst also ensuring they create opportunities for teams to collaborate in person and strengthen human connections.

Leaders should also be working with their teams on growing their skills and confidence in other areas required to be successful in a technology-heavy future. In a world of AI, skills such as critical thinking, adaptability and flexibility are crucial, as these are things AI still can’t replicate. Leaders should take this opportunity to encourage teams to upskill themselves in these areas – this will not only benefit the business, but will also help to alleviate employees’ potential fears around AI ‘taking their jobs’.

When we recruit analysts at Grayce, we look predominantly for eight ‘soft’ skills: proactivity, accountability, effective communication, stakeholder management, problem solving, commercial awareness, resilience and adaptability. Of course, other skills such as prompt engineering and a general working knowledge of AI help, but these skills can be taught on the job.

Coming equipped with the right soft skills and emotional intelligence, our analysts are able to apply critical thinking to not only understand the tools that they are using, but also mitigate automation bias risks. Ultimately, once employees properly understand how AI tools have arrived at a certain output, it is much easier to identify whether this is the correct course of action to be taken.

Creating full team buy-in for AI tools

While AI understanding and adoption is now vital across many businesses, for it to be effective, teams need to be fully bought in. The difficulty here can come from employees’ reluctance to integrate these tools into their day-to-day, which usually stems from either a lack of understanding of why the tool is being implemented in the first place; a worry about the prospect of AI taking parts of, or all of, their job; or concerns about the ethics of AI.

This concern is prominent amongst younger workers, with Survey Monkey finding 32% of respondents aged 18-24 are worried AI will make their job obsolete.

Our own research found that 41% are worried about the ethics of AI. With this data in mind, leaders need to identify potential sources of resistance and develop strategies to address them. This could be as simple as creating an environment where open dialogue about concerns and opportunities is encouraged.

Leaders can also create buy-in by establishing clear company policies and regulations to promote the responsible and sustainable use of AI, including around data ownership. Businesses can drive success through the right talent, too. At Grayce, we provide our clients with digitally native, technologically curious analysts who are already well-versed in AI. This not only supports businesses with the integration of AI programmes, but also provides much-needed passion and enthusiasm for the technology, which naturally filters through to existing teams and removes some of the reluctance.

Proving AI’s ROI starts with a bought-in team

Fostering skills that will protect workers from an over-reliance on AI, while still reaping the benefits of these tools, is not the only challenge for leaders. There’s also increasing pressure to show immediate ROI from AI. This has been a common talking point at the AI conferences we’ve supported recently.

The Microsoft and LinkedIn study that we mentioned earlier backs this up, finding that 79% of leaders agree they need AI to stay competitive, yet 59% worry about their ability to demonstrate concrete, successful results.

While buy-in and ROI may not immediately seem linked, they absolutely are. Ultimately, to create ROI from AI, teams need to feel bought-in enough to use the tools properly and take the time to understand them. The benefits need to be demystified, and employees need to be educated on the fact that AI is there to help them, not take their jobs. By doing this, correct AI usage will ramp up, and in time the ROI will follow.

If you’re interested in how Grayce’s digitally-native analysts can support your team, contact us today.


