AI Plays Doctor, Lawyer, Janitor: What All Is It Doing?

AI is creeping into medical, legal, and operational domains. What can it actually do there? Read on.


The age of AI used to be imagined as a single big breakthrough moment or a point where machines would suddenly take over a domain. But 2025 looks different. AI systems are entering thousands of job categories quietly, almost sideways. They write legal briefs, summarise clinical notes, triage patient cases, generate inventory pick-lists in warehouses, resolve compliance tickets, scan CCTV for safety violations, and build quarterly sales decks. They work inside the flow of labour instead of replacing labour outright.

What’s interesting is not just the capability, but the ambiguity. These models perform tasks across professions without ever having a clear sense of which profession they’re in. They float between roles. Today, a single model inside a Fortune 500 enterprise might perform micro-work that touches medicine, law, HR, logistics, and customer care, all in the same hour.

Task Competence Without Context

Large AI systems can perform medical summarisation, assist in legal research, and generate cleaning schedules for high-traffic buildings. These are not trivial demonstrations; they reflect the ability of models to operate across highly differentiated domains. Yet the machine does not possess domain identity. It only responds to pattern structures.

This creates a new class of work abstraction: cross-domain competence with zero occupational self-awareness. The model does not know that it has switched from advising on clinical risk factors to drafting questions for union grievance cases. The task space is fluid, but the model has no conceptual frame for contextual continuity.

The Boundary Question in Multi-Domain AI

This raises a new operational consideration: how do we define the limits of what a model should do within an organisation? When a model shifts between multiple skill layers, it becomes difficult to allocate responsibility.

A hospital can define when and how a model participates in clinical summarisation. A law firm can define how and when the model supports discovery workflows. But both domains share the same underlying pattern engine. This creates new governance gaps because the model doesn’t contain the concept of scope. Scope becomes an external rule.
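The idea that scope lives outside the model can be sketched in a few lines. In this hypothetical setup (all names are illustrative, not any real product's API), a single shared engine serves every tenant, and each organisation attaches its own scope rules at the dispatch layer:

```python
# Illustrative sketch: the engine is domain-agnostic; scope is an external rule.
SHARED_ENGINE = "pattern-engine-v1"  # same underlying model for every tenant

# Per-organisation scope rules, maintained outside the model itself
SCOPES = {
    "hospital": {"clinical_summarisation", "triage_notes"},
    "law_firm": {"discovery_support", "legal_research"},
}

def dispatch(tenant: str, task: str) -> str:
    """Forward a task to the shared engine only if the tenant's scope permits it."""
    allowed = SCOPES.get(tenant, set())
    if task not in allowed:
        return f"refused: '{task}' is outside the scope defined for {tenant}"
    return f"{SHARED_ENGINE} running '{task}' for {tenant}"
```

The point of the sketch is that nothing inside `SHARED_ENGINE` changes between tenants; the hospital and the law firm differ only in the `SCOPES` table they maintain around it.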

What “Occupational Identity” Means in AI Governance

Future regulatory models may need to formalise domain boundaries not based on capabilities, but based on permitted application slots. The question will not be: “Can the model perform this task?” but rather: “Is the model authorised to perform this task?”

If AI systems become general-purpose service engines capable of participating in every labour category, then the occupational identity of the model is operationally meaningless. What matters is the rules around application. The lines will be drawn not at the level of capability, but at the level of allocation.
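The capability-versus-allocation distinction can be made concrete with a minimal sketch (names and slot labels are hypothetical): capability is treated as necessary but never sufficient, and a deployment slot must explicitly grant each task.

```python
# Illustrative sketch: authorisation is decided by allocation, not capability.
CAPABLE = {"medical_summary", "legal_brief", "cleaning_schedule"}  # what the model *can* do

AUTHORISED_SLOTS = {  # what it is *allowed* to do, per permitted application slot
    "clinic_backoffice": {"medical_summary"},
    "facilities": {"cleaning_schedule"},
}

def may_run(slot: str, task: str) -> bool:
    """True only when the model is both capable of the task and the slot grants it."""
    return task in CAPABLE and task in AUTHORISED_SLOTS.get(slot, set())
```

Under this rule, a model that can draft a legal brief still may not do so from the facilities slot; the line is drawn in `AUTHORISED_SLOTS`, not in the model.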

Conclusion

AI can perform functions across multiple labour categories without possessing any concept of occupation. The challenge ahead is not model accuracy but model containment. Organisations will need to assign boundaries to machine participation based on risk, not capability, and these boundaries will determine how AI is allowed to behave in future work infrastructures.