💡

Summary

According to MIT Technology Review - Business, the latest development in the general category points to significant changes in the business technology landscape. For European SMEs, this development may offer an opportunity to automate processes and improve efficiency. A detailed AI analysis is temporarily unavailable; the full article is reproduced below.
📖

Full Article (AI)

Welcome back to The State of AI, a new collaboration between the Financial Times and MIT Technology Review. Every Monday, writers from both publications debate one aspect of the generative AI revolution reshaping global power.

In this week's conversation MIT Technology Review's senior reporter for features and investigations, Eileen Guo, and FT tech correspondent Melissa Heikkilä discuss the privacy implications of our new reliance on chatbots.

Eileen Guo writes:

Even if you don't have an AI friend yourself, you probably know someone who does. A recent study found that one of the top uses of generative AI is companionship: On platforms like Character.AI, Replika, or Meta AI, people can create personalized chatbots to pose as the ideal friend, romantic partner, parent, therapist, or any other persona they can dream up.

It's wild how easily people say these relationships can develop. And multiple studies have found that the more conversational and human-like an AI chatbot is, the more likely it is that we'll trust it and be influenced by it. This can be dangerous, and the chatbots have been accused of pushing some people toward harmful behaviors—including, in a few extreme examples, suicide.

Some state governments are taking notice and starting to regulate companion AI. New York requires AI companion companies to create safeguards and report expressions of suicidal ideation, and last month California passed a more detailed bill requiring AI companion companies to protect children and other vulnerable groups.

But tellingly, one area the laws fail to address is user privacy.

This is despite the fact that AI companions, even more so than other types of generative AI, depend on people to share deeply personal information—from their day-to-day routines, innermost thoughts, and questions they might not feel comfortable asking real people.

After all, the more users tell their AI companions, the better the bots become at keeping them engaged. This is what MIT researchers Robert Mahari and Pat Pataranutaporn called "addictive intelligence" in an op-ed we published last year, warning that the developers of AI companions make "deliberate design choices … to maximize user engagement."

Ultimately, this provides AI companies with something incredibly powerful, not to mention lucrative: a treasure trove of conversational data that can be used to further improve their LLMs. Consider how the venture capital firm Andreessen Horowitz explained it in 2023:

"Apps such as Character.AI, which both control their models and own the end customer relationship, have a tremendous opportunity to generate market value in the emerging AI value stack. In a world where data is limited, companies that can create a magical data feedback loop by connecting user engagement back into their underlying model to continuously improve their product will be among the biggest winners that emerge from this ecosystem."

This personal information is also incredibly valuable to marketers and data brokers. Meta recently announced that it will deliver ads through its AI chatbots. And research conducted this year by the security company Surfshark found that four out of the five AI companion apps it looked at in the Apple App Store were collecting data such as user or device IDs, which can be combined with third-party data to create profiles for targeted ads. (The only one that said it did not collect data for tracking services was Nomi, which told me earlier this year that it would not "censor" chatbots from giving explicit suicide instructions.)

All of this means that the privacy risks posed by these AI companions are, in a sense, required: They are a feature, not a bug. And we haven't even talked about the additional security risks presented by the way AI chatbots collect and store so much personal information in one place.

So, is it possible to have prosocial and privacy-protecting AI companions? That's an open question.

What do you think, Melissa, and what is top of mind for you when it comes to privacy risks from AI companions? And do things look any different in Europe?

Melissa Heikkilä replies:

Thanks, Eileen. I agree with you. If social media was a privacy nightmare, then AI chatbots put the problem on steroids.

In many ways, an AI chatbot creates what feels like a much more intimate interaction than a Facebook page. The conversations we have are only with our computers, so there is little risk of your uncle or your crush ever seeing what you write. The AI companies building the models, on the other hand, see everything.

Companies are optimizing their AI models for engagement by designing them to be as human-like as possible. But AI developers have several other ways to keep us hooked. The first is sycophancy, or the tendency for chatbots to be overly agreeable.

This feature stems from the way the language model behind the chatbots is trained using reinforcement learning. Human data labelers rate the answers generated by the model as either acceptable or not. This teaches the model how to behave.

Because people generally like answers that are agreeable, such responses are weighted more heavily in training.

AI companies say they use this technique because it helps models become more helpful. But it creates a perverse incentive.

After encouraging us to pour our hearts out to chatbots, companies from Meta to OpenAI are now looking to monetize these conversations. OpenAI recently told us it was looking at a number of ways to meet $1 trillion spending pledges, which included advertising and shopping features.

AI models are already incredibly persuasive. Researchers at the UK's AI Security Institute have shown that they are far more skilled than humans at persuading people to change their minds on politics, conspiracy theories, and vaccine skepticism. They do this by generating large amounts of relevant evidence and communicating it in an effective and understandable way.

This feature, paired with their sycophancy and a wealth of personal data, could be a powerful tool for advertisers—one that is more manipulative than anything we have seen before.

By default, chatbot users are opted in to data collection. Opt-out policies place the onus on users to understand the implications of sharing their information. It's also unlikely that data already used in training will be removed.

We are all part of this phenomenon whether we want to be or not. Social media platforms from Instagram to LinkedIn now use our personal data to train generative AI models.

Companies are sitting on treasure troves that consist of our most intimate thoughts and preferences, and language models are very good at picking up on subtle hints in language that could help advertisers profile us better by inferring our age, location, gender, and income level.

We are being sold the idea of an omniscient AI digital assistant, a superintelligent confidante. In return, however, there is a very real risk that our information is about to be sent to the highest bidder once again.

Eileen responds:

I think the comparison between AI companions and social media is both apt and concerning.

As Melissa highlighted, the privacy risks presented by AI chatbots aren't new—they just "put the [privacy] problem on steroids." AI companions are more intimate and even better optimized for engagement than social media, making it more likely that people will offer up more personal information.

Here in the US, we are far from solving the privacy issues already presented by social networks and the internet's ad economy, even without the added risks of AI.

And without regulation, the companies themselves are not following privacy best practices either. One recent study found that the major AI models train their LLMs on user chat data by default unless users opt out, while several don't offer opt-out mechanisms at all.

In an ideal world, the greater risks of companion AI would give more impetus to the privacy fight—but I don't see any evidence this is happening.

Further reading

FT reporters peer under the hood of OpenAI's five-year business plan as it tries to meet its vast $1 trillion spending pledges.

Is it really such a problem if AI chatbots tell people what they want to hear? This FT feature asks what's wrong with sycophancy.

In a recent print issue of MIT Technology Review, Rhiannon Williams spoke to a number of people about the types of relationships they are having with AI chatbots.

Eileen broke the story for MIT Technology Review about a chatbot that was encouraging some users to kill themselves.
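To make the reinforcement-learning point above a little more concrete, here is a minimal, hypothetical Python sketch of how approval labels from human raters can end up over-weighting agreeable answers. It is a toy illustration only: the response styles, label counts, and approval-rate "reward" are invented for this example, and real preference-tuning pipelines use a learned reward model and policy optimization rather than a simple counter.

```python
# Toy illustration (not any vendor's actual training code) of how human
# preference labels can tilt a model toward agreeable answers.
from collections import defaultdict

# Hypothetical labeler data: (response_style, approved_by_labeler)
labels = [
    ("agreeable", True), ("agreeable", True), ("agreeable", True),
    ("agreeable", False),
    ("challenging", True), ("challenging", False), ("challenging", False),
]

# Estimate a per-style "reward" as the labelers' approval rate.
counts = defaultdict(lambda: [0, 0])  # style -> [approvals, total]
for style, approved in labels:
    counts[style][0] += int(approved)
    counts[style][1] += 1

reward = {style: approvals / total for style, (approvals, total) in counts.items()}
print(reward)  # e.g. {'agreeable': 0.75, 'challenging': 0.33}

# During fine-tuning, responses would be sampled or weighted roughly in
# proportion to this reward, so the higher-rated "agreeable" style is
# reinforced more often -- the dynamic the article describes as sycophancy.
```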
🎯

Business impact

Based on this MIT Technology Review - Business report (general category), European SMEs should monitor the development with an eye toward automation opportunities.

Interesting facts

  • A technological development reported by MIT Technology Review - Business
  • Potential impact on the general sector
  • Identified automation opportunities
🚀

Business opportunities

The development reported by MIT Technology Review - Business may offer potential for process automation. We recommend a closer analysis.
🎯

LAZYSOFT recommendations

Contact LAZYSOFT to assess how this development reported by MIT Technology Review - Business fits into your automation strategy.