Artificial Intelligence
AI-generated work from managers is damaging trust among employees
Wednesday, March 25, 2026 | Russ Scritchfield
More than half of US employees report receiving low-quality, AI-generated work from managers, with most saying it damages trust in leadership and increases skepticism about AI while exposing major gaps in workplace training and standards.
Across industries, employees are noticing a shift in how work is created and presented by their managers. Generative AI tools are now part of the managerial toolkit, shaping emails, presentations, performance notes, and strategic updates. While AI can speed up drafting and analysis, a growing number of team members feel unsettled when work that influences careers, pay, and priorities appears to be created by software without clear disclosure. The outcome is a trust gap that should concern any organization that values accountability and healthy culture.
What The Findings Mean For Employers
Trust is the foundation of effective leadership. Employees expect their managers to bring judgment, context, and accountability to the work that guides teams. When AI-shaped content arrives without clarity on what is human-created and what is machine-generated, that expectation weakens. People start to ask whether the guidance truly reflects a manager's understanding. They also question whether their own contributions will be fairly assessed if AI is shaping the criteria, the review, or the story shared with senior leadership.
Why Employees Feel Misled
Three drivers are most common. First is lack of disclosure. If team members learn after the fact that AI wrote a performance summary or a project plan, they can feel deceived even if the material is accurate. Second is quality drift. AI text can sound confident while containing gaps in logic or awareness of local context. Employees who must correct these gaps lose time and patience. Third is fairness. People fear that AI-generated materials will substitute for direct observation, leading to decisions that are detached from real work.
The Line Between Assistance And Authorship
AI can be a helpful assistant. It can outline options, summarize long threads, and propose first drafts. But employees draw a line at authorship of materials that carry managerial voice and authority. When a manager uses AI for ideation or structure, then adds judgment, examples, and decisions, the result still feels like leadership. When a manager outsources the narrative and adds only a quick review, it feels like abdication. Stating where the line is for your organization is essential, and then living by it is even more important.
Practical Steps To Rebuild Confidence
Start with plain language disclosure. Give managers a simple formula for noting AI involvement in emails, briefs, and plans. Encourage managers to say what the tool assisted with and to affirm that they have reviewed and take responsibility for the content. Build a flow that requires human verification of facts, names, data, and commitments. Give teams a shared rubric for when AI can be used and when it should not be used, including anything that evaluates performance, pay, or sensitive employee information.
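One way to make such a rubric enforceable is to encode it directly in tooling. The sketch below is a minimal, hypothetical example in Python; the document categories, rules, and field names are assumptions for illustration, and an organization would substitute its own document types and red lines.

```python
# Minimal sketch of an AI-use rubric encoded as data plus a check function.
# Categories, rules, and field names here are hypothetical examples only.

AI_USE_RUBRIC = {
    "meeting_summary":       {"ai_drafting": "allowed",    "requires_human_review": True},
    "project_plan":          {"ai_drafting": "allowed",    "requires_human_review": True},
    "performance_review":    {"ai_drafting": "prohibited", "requires_human_review": True},
    "compensation_decision": {"ai_drafting": "prohibited", "requires_human_review": True},
}

def check_ai_use(document_type: str, used_ai: bool, human_reviewed: bool) -> list[str]:
    """Return a list of policy issues for a document; an empty list means compliant."""
    issues = []
    rule = AI_USE_RUBRIC.get(document_type)
    if rule is None:
        issues.append(f"No rubric entry for '{document_type}'; escalate before using AI.")
        return issues
    if used_ai and rule["ai_drafting"] == "prohibited":
        issues.append(f"AI drafting is not permitted for '{document_type}'.")
    if rule["requires_human_review"] and not human_reviewed:
        issues.append(f"'{document_type}' requires documented human review.")
    return issues

print(check_ai_use("performance_review", used_ai=True, human_reviewed=False))
```

Even a simple check like this gives managers a clear, shared answer before the content goes out, rather than a judgment call made after the fact.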
Policy And Governance That Employees Can Believe In
Publish a short policy that employees can read in a few minutes. Explain approved tools, permitted use cases, and red lines such as private data, health information, or legal claims. Describe how prompts and outputs are stored and who can access them. State how the company will audit usage and address issues. Create a channel for employees to ask questions and report concerns without fear of retaliation. Pair the policy with manager training that focuses on real scenarios from your workplace, not abstract examples.
Measuring Trust Without Guesswork
Track trust the way you track revenue or uptime. Use periodic pulse surveys with simple, repeatable questions about clarity, fairness, and confidence in managerial communication. Monitor the rework rate of AI-assisted materials by counting corrections, clarifications, or withdrawals. Watch for patterns in grievances, attrition among high performers, and internal referrals. Share the trends with employees and explain the actions you will take. Transparency about measurement builds credibility, especially when you follow through and show progress.
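For teams that want to make the rework rate concrete, the following sketch shows one way to compute it from a log of published documents. The record fields and sample data are illustrative assumptions, not a prescribed schema.

```python
# Sketch: computing a rework rate for AI-assisted materials from a simple log.
# Field names ("ai_assisted", "corrections") are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class DocumentRecord:
    title: str
    ai_assisted: bool
    corrections: int  # corrections, clarifications, or withdrawals after publishing

def rework_rate(records: list[DocumentRecord]) -> float:
    """Share of AI-assisted documents that needed at least one correction."""
    ai_docs = [r for r in records if r.ai_assisted]
    if not ai_docs:
        return 0.0
    reworked = sum(1 for r in ai_docs if r.corrections > 0)
    return reworked / len(ai_docs)

log = [
    DocumentRecord("Q3 roadmap update", ai_assisted=True, corrections=2),
    DocumentRecord("Sprint retro notes", ai_assisted=True, corrections=0),
    DocumentRecord("Hiring plan", ai_assisted=False, corrections=1),
]
print(f"AI-assisted rework rate: {rework_rate(log):.0%}")  # 50%
```

Tracking the number over time matters more than the absolute value; a rising rework rate is an early signal that AI-assisted output is outpacing human review.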
Making Disclosure A Habit
Trust grows when disclosure is routine, not rare. Offer ready-to-use language that managers can add to messages and documents. Provide templates in collaboration tools that include a short note about AI assistance and human review. Encourage leaders to narrate their process in meetings, such as explaining that a draft began with an AI outline and was then expanded with examples from the team's own work. Over time, this simple practice normalizes responsible use and reduces speculation.
Supporting Managers To Use AI Well
Managers need guidance that goes beyond tool tips. Teach them how to craft prompts that reflect company voice and values, how to evaluate AI outputs for bias or missing context, and how to fact-check efficiently. Show how to adapt AI suggestions to the needs of different audiences, from executives to front-line teams. Coach managers to prioritize listening, walk the floor, and integrate insights from direct observation. AI can improve speed, but it cannot replace the relationship work that trust requires.
Risks To Brand And Legal Readiness
Poorly governed AI use can expose a company to reputational or legal risk. Confident but inaccurate updates can mislead stakeholders. Unlabeled AI content can prompt public backlash if discovered. Use of sensitive data in prompts can violate privacy commitments. To reduce risk, require source citations in background research, maintain a record of significant AI-assisted decisions, and run periodic reviews with legal, privacy, and security partners. When in doubt, choose slower accuracy over faster output.
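One lightweight way to keep that record is an append-only log of AI-assisted decisions. The sketch below assumes a simple JSON-lines file with hypothetical field names; the actual fields should come from your legal, privacy, and security partners.

```python
# Sketch: appending entries to a JSON-lines log of AI-assisted decisions.
# The file path and field names are illustrative assumptions, not a standard.

import json
from datetime import datetime, timezone

def log_ai_assisted_decision(path: str, summary: str, tool: str,
                             reviewer: str, sources: list[str]) -> None:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "summary": summary,    # what was decided or published
        "tool": tool,          # which AI tool assisted
        "reviewer": reviewer,  # the human accountable for the content
        "sources": sources,    # citations backing the key claims
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_ai_assisted_decision(
    "ai_decision_log.jsonl",
    summary="Published Q3 customer update to leadership",
    tool="internal LLM assistant",
    reviewer="jane.doe",
    sources=["analytics dashboard export 2026-03-20"],
)
```

A record like this also makes the periodic reviews mentioned above far easier, because auditors can sample real entries instead of reconstructing decisions from memory.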
What Employees Want To Hear
Employees are not rejecting technology. They are asking for clarity, accountability, and respect. They want to know when AI is used, what human oversight looks like, and how their work is evaluated fairly. They want AI to remove low value tasks so managers have more time for coaching and recognition. They also want a voice in how policies evolve. Invite them to contribute to guidelines, help test new workflows, and identify moments where human judgment is essential.
A Simple Path Forward
Set a clear standard for disclosure. Train managers on practical use and common pitfalls. Measure trust and share results. Involve employees in policy updates. Reward leaders who model transparency and sound judgment. These steps do not require large budgets or complex systems. They require consistent behavior and a willingness to explain how work is made. When leaders do that, AI becomes another tool in service of people, not a wedge that separates them.
