Linking Behavior to Trust in MedTech: The AI Readiness Check for Leaders
- elizabethwong0
- Oct 9
- 4 min read

In my latest cohort of five medtech startups at PolyU, four companies secured their next round of seed funding. Several moved quickly on next steps: incorporating the company, appointing the right CEO, and making initial moves toward commercialization. As this sector scales, so does the need for concrete leadership. Leaders establish explicit rules and routines that shape how teams adopt and critically evaluate AI outputs, and they manage emotions to sustain coordination and judgment under time pressure. Winning now means building culture on purpose, not by accident.
Why Culture Determines AI Uptake
Culture can feel abstract, yet it is how people understand the world and predict how the organization will act. A cultural lens explains why similar startups diverge in their adoption of AI: leader-defined norms and visible signals, such as rituals and language, guide everyday AI-use behaviors. Edgar Schein compares culture to a lily pond: what you see on the surface (artifacts) depends on the deeper root system (history, values, and explanations for “why we do things here”). Leaders want to know their culture so they can influence behavior. Employees want to know whether they will fit. Customers want to know what to expect in every interaction.
Cultural Layers That Fuel Innovation
Least visible: values that support innovation, such as open communication and cooperation.
Middle layer: norms for innovation, such as explicit expectations to share new ideas and approaches to solving problems.
Most visible: artifacts of innovation, such as stories of problem-solving wins and physical setups that enable prototyping and learning.
TRUST as the Adoption Mechanism
Employees won’t trust AI if they don’t trust their leader (De Cremer, 2025). Trust is central and emerges from authenticity, logic, and empathy (Frei & Morriss, 2020). In practice:
Authenticity: say what you stand for and act consistently; narrate trade-offs in plain language.
Logic: show your reasoning and your model’s reasoning; make decision paths visible.
Empathy: acknowledge risks, workload, and face concerns; protect dignity when experiments fail.
Self-determination theory suggests that autonomy and reciprocal helping foster durable motivation, but they also create dilemmas: how much autonomy to grant, how to manage face loss when AI fails, and how to promote innovation while preserving relational harmony. Most culture-change attempts fail because they tweak one element (say, artifacts) while leaving structure, incentives, and management expectations untouched.
A Few Tips to Form Concrete Behaviors with AI
Codify AI rules of engagement: when to use AI, when to escalate, how to challenge outputs, and how to record overrides.
Clarify autonomy: define decision rights for humans versus AI, thresholds for human-in-the-loop scenarios, and safe-to-try zones.
Normalize error without face loss: use blameless postmortems, private debriefs for sensitive issues, and “learning credits” for high-quality experiments.
Make logic visible: decision logs, model cards, and “why I decided” notes from leaders.
Create visible artifacts: weekly demo days, “innovation wall” of solved problems, and co-location of data scientists with clinicians and product teams.
Establish rituals and language: start stand-ups with one “assumption to test,” end sprints with one “assumption retired.”
Align incentives: reward high-quality challenges to AI outputs, customer impact, and cross-team help, not just feature count.
Measure culture-performance links: track override rates, cycle time from idea to prototype, incident learning rate, and employee trust signals (perceived authenticity, logic, and empathy); see the logging sketch after this list.
Manage time pressure: pre-define “fast paths” with guardrails so urgency does not erode judgment.
Role clarity for scale: appoint an owner for AI risk and an owner for culture rituals, and ensure the CEO’s time is deployed to match the commercialization stage.
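To make “record overrides” and “measure override rates” concrete, here is a minimal sketch of what such a log could look like in code. Everything in it (the AIDecisionRecord structure, the override_rate function, the sample entries) is a hypothetical illustration under my assumptions, not a prescribed tool:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    """One logged interaction with an AI recommendation."""
    decision_id: str
    ai_recommendation: str
    human_action: str   # what the team actually did
    overridden: bool    # True if the human diverged from the AI
    rationale: str      # plain-language "why I decided" note
    logged_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def override_rate(records: list[AIDecisionRecord]) -> float:
    """Share of decisions where the human overrode the AI output."""
    if not records:
        return 0.0
    return sum(r.overridden for r in records) / len(records)

# Example: two logged decisions, one override with its rationale recorded.
log = [
    AIDecisionRecord("D-001", "flag device reading as anomalous",
                     "flagged and escalated to clinician", False,
                     "AI output consistent with sensor history"),
    AIDecisionRecord("D-002", "auto-approve supplier batch",
                     "held batch for manual QA", True,
                     "batch size outside the model's training range"),
]
print(f"Override rate: {override_rate(log):.0%}")  # Override rate: 50%
```

The point is behavioral, not technical: every override carries a plain-language rationale, so the “why I decided” notes recommended above accumulate automatically as the team works.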
Critical Questions Leaders Must Answer
How much autonomy should staff have when using AI? Where is human-in-the-loop review mandatory?
How will the team manage face loss when AI fails? What are the scripts and safeguards?
How do we promote innovation while preserving relational harmony? What norms protect candor and respect?
What signals will we watch weekly to know if culture supports performance?
Innovation Behavior Checklist (Hogan & Coote, 2014)
Use this list to reflect on your current practices and their impact on building trust; a simple scoring sketch follows the checklist.
Client-focused
Provide clients with services or products that offer unique benefits superior to those of competitors.
Solve clients’ problems in very innovative ways.
Provide innovative ideas and solutions to clients.
Present innovative solutions to our clients.
Seek out novel ways to tackle problems.
Marketing-focused
Develop “revolutionary for the industry” marketing programs for our services or products.
Adopt novel ways to market our firm.
Innovate our marketing programs to stay ahead of the market.
Implement innovative marketing programs.
Technology-focused
Innovate with new software.
Innovate with new technology.
Introduce new integrated systems and technology.
Adopt the latest technology in the industry.
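If you want to turn the reflection into numbers, one lightweight option (my suggestion, not part of Hogan & Coote’s published method) is to rate each item on a 1–5 scale and compare dimension averages. The ratings below are hypothetical:

```python
from statistics import mean

# Hypothetical 1-5 Likert ratings (1 = never, 5 = always) collected in a
# leadership meeting for each checklist item, grouped by focus area.
ratings = {
    "client":     [4, 3, 4, 4, 5],  # five client-focused items above
    "marketing":  [2, 3, 3, 2],     # four marketing-focused items
    "technology": [4, 4, 3, 3],     # four technology-focused items
}

# Average each dimension to see where innovation behavior is weakest.
for focus, scores in ratings.items():
    print(f"{focus:<10} mean = {mean(scores):.1f}")
```

A low average on one dimension tells you where to aim the rules, rituals, and incentives from the tips above.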
What Aligns Culture and Results
When leaders set clear rules and routines for AI, protect face during failure, and model authenticity, logic, and empathy, teams learn faster, question better, and ship safer. The result is not only funding wins and technical progress, but also repeatable trust—both within the team and with customers.
Medtech leaders: build a trust-centered culture for AI adoption with clear rules, rituals, autonomy, face-saving practices, and measurable behaviors that drive results. Send me your comments: which trust mechanism (authenticity, logic, or empathy) is hardest to model under time pressure? Try the Hogan & Coote checklist in your next leadership meeting. If you’re a medtech founder looking to systematize trust and innovation, let’s connect!
Sources
De Cremer, D. (2025). Employees won’t trust AI if they don’t trust their leaders. Harvard Business Review. https://hbr.org/2025/03/employees-wont-trust-ai-if-they-dont-trust-their-leaders [Accessed 6 October 2025].
Frei, F.X. & Morriss, A. (2020). Begin with trust. Harvard Business Review, 98(3), pp. 112–121.
Hogan, S.J. & Coote, L.V. (2014). Organizational culture, innovation, and performance: A test of Schein’s model. Journal of Business Research, 67(8), pp. 1609–1621.