How To Realize Efficiency From AI Without Compromising Control and Safety – Sociotechnical Envelopment

By Dr. Esko Penttinen, Associate Professor at Aalto University, Academic Research Fellow at Center for Digital Business Growth

Many organizations face a growing tension with AI: the most powerful systems are often the hardest to explain and understand. This creates a dilemma – how to capture their value without compromising control, accountability, or regulatory compliance.

A practical answer is sociotechnical envelopment. Rather than trying to make every model fully explainable, this approach focuses on designing the broader system around it – people, processes, data flows, and technical architecture – so that even “black box” AI operates within controlled, well-understood boundaries. By shifting the focus from the model to the system, organizations can safely expand their use of advanced AI. This is particularly critical in complex environments such as the public sector, where the need for performance gains is high, but so are the demands for transparency and accountability.

Based on Research

Asatiani, A., Malo, P., Nagbøl, P. R., Penttinen, E., Rinta-Kahila, T., & Salovaara, A. (2021). Sociotechnical envelopment of artificial intelligence: An approach to organizational deployment of inscrutable artificial intelligence systems. Journal of the Association for Information Systems, 22(2), 325-352.

Asatiani, A., Malo, P., Nagbøl, P. R., Penttinen, E., Rinta-Kahila, T., & Salovaara, A. (2020). Challenges of explaining the behavior of black-box AI systems. MIS Quarterly Executive, 19(4), 259-278.

Asatiani, A., Hakkarainen, T., Paaso, K., & Penttinen, E. (2024). Security by envelopment – a novel approach to data-security-oriented configuration of lightweight-automation systems. European Journal of Information Systems, 33(5), 631-653.

Learn more about Esko Penttinen’s research.