Risk is all about context
Risk is all about context. In fact, one of the greatest dangers is failing to acknowledge or understand your context: that's why you must start there when evaluating risk.
This is particularly important when it comes to reputation. Think, for instance, about your customers and their expectations. How might they feel about interacting with an AI chatbot? How damaging might it be to provide them with false or misleading information? Maybe minor customer inconvenience is something you can deal with, but what if it has a significant health or financial impact?
Even if implementing AI seems to make sense, there are clearly some downstream reputational risks that need to be considered. We've spent years talking about the importance of user experience and being customer-focused: while AI might help us here, it could also undermine those things as well.
There’s an analogous query to be requested about your groups. AI could have the capability to drive effectivity and make individuals’s work simpler, however used within the mistaken means it may severely disrupt present methods of working. The trade is speaking lots about developer expertise just lately—it’s one thing I wrote about for this publication—and the selections organizations make about AI want to enhance the experiences of groups, not undermine them.
In the latest edition of the Thoughtworks Technology Radar, a biannual snapshot of the software industry based on our experiences working with clients around the world, we talk about precisely this point. We call out AI team assistants as one of the most exciting emerging areas in software engineering, but we also note that the focus should be on enabling teams, not individuals. "You should be looking for ways to create AI team assistants to help create the '10x team,' as opposed to a bunch of siloed AI-assisted 10x engineers," we say in the latest report.
Failing to heed the working context of your teams could cause significant reputational damage. Some bullish organizations might see this as part and parcel of innovation; it's not. It's showing potential employees, particularly highly technical ones, that you don't really understand or care about the work they do.
Tackling risk through smarter technology implementation
There are plenty of tools that can be used to help manage risk. Thoughtworks helped put together the Responsible Technology Playbook, a collection of tools and techniques that organizations can use to make more responsible decisions about technology (not just AI).
However, it's important to note that managing risks, particularly those around reputation, requires real attention to the specifics of technology implementation. This was particularly clear in work we did with a number of Indian civil society organizations, developing a social welfare chatbot that citizens can interact with in their local languages. The risks here were not unlike those discussed earlier: the context in which the chatbot was being used (as support for accessing vital services) meant that incorrect or "hallucinated" information could stop people from getting the resources they depend on.