The privacy focus centers on preventing data leaks and meeting regulatory compliance. By making ChatGPT available for enterprise use, Microsoft could significantly expand businesses’ AI capabilities and help boost their revenue. However, the move also raises important questions about equitable access and AI safety, along with a substantial price tag.
The cutting-edge AI tool shows promise in advancing natural language processing with its ability to produce human-like language. Recognizing ChatGPT’s potential, Microsoft has been pushing to commercially license the model to businesses. This shift to private use would allow companies to integrate ChatGPT into applications and services, such as automated customer service agents, interactive content creation software, and more.
For businesses, integrating a powerful AI model like ChatGPT could enable more advanced and capable applications, potentially improving processes and customer experiences. The trade-off is that these private instances of ChatGPT could cost businesses as much as ten times what they currently pay for the standard version of ChatGPT. The premium licensing fee would grant companies exclusive access to state-of-the-art AI with an emphasis on mitigating data leaks and adhering to regulations, but not every company can afford to pay for a “maybe.”
For instance, Samsung banned the use of these AI chatbots after an employee entered “internal source code” into ChatGPT on a work device this April. A privatized version of ChatGPT could let employees take advantage of chatbots and deep learning models without the risk of leaking confidential information.
That said, other companies across multiple industries, like Verizon, JPMorgan, and Goldman Sachs, have already taken steps to limit the use of AI models in their work environments. A privatized version will find footing and hold appeal, but the cost may be greater than a financial hit: your security and input data could be at risk if everything isn’t up to par.
The private version will run on a subscription model in which input fed to ChatGPT by a business’s employees and customers won’t be used to train the underlying language model by default. The difference is that Microsoft’s version will use the company’s Azure platform as its backend rather than competing platforms like Amazon Web Services. Microsoft likes to keep everything in-house, after all.
It sounds like a dream come true.
A version of ChatGPT that offers all of the benefits of artificial intelligence and model training without the threat of outside data contamination or the accidental spread of company information through the neural network. However, as mentioned previously, all that glitters is not gold.
For starters, the price tag is steep. While the final figure is still up in the air, paying ten times your current rate for a potentially more consistent service amounts to wishful thinking. The likelier outcome is that Microsoft will release the same service as before with new guardrails in place, then lean on its marketing team to make it seem shinier than before and justify the new price.
Second, how safe are these guardrails? While AI models are rapidly increasing in complexity and usefulness, security remains a real concern. So real, in fact, that many companies currently ban internal AI use in certain cases for fear of data leaks. Is Microsoft (and ChatGPT) the genie in the bottle you’ve been wishing for, ready to grant your wishes and give you privatized access to the best machine learning and AI models in the industry?
The more realistic outcome is that instead of a genie in a bottle, you receive a flashy AI vending machine. You drop an obscene amount of money, pick your poison, and get a personalized answer based on your internal training data, AI workloads, and data collection. Does this taste like a win? Only time will tell, but given Microsoft’s proclivity for money over function, the company has a lot to live up to this time around.