
#TechStates

ThinkIn

Is Elon Musk a good billionaire?

He’s the world’s richest man and the mastermind behind Tesla, SpaceX, Neuralink and PayPal. His tweets make headlines, impact share prices and attract lawsuits. He opposes a billionaires’ tax and has threatened staff planning to unionise. He operates differently from most other successful tech leaders, yet behind all the controversy Musk always seems to be in pursuit of a higher purpose. Is his vision for humanity – where the problems of sustainable energy are solved and humans are well on the way to becoming a multi-planetary species – just the wild ambition of an eccentric billionaire showman? Is Elon Musk just another ruthless tycoon, or is he a good billionaire? Is there such a thing? In collaboration with TxP, a London-based network bridging the worlds of tech and policy.

Editor and invited experts
Luke Gbedemah, Reporter
Dr Anton Howes, invention historian and author of ‘Age of Invention’
Jacob Mchangama, Danish lawyer, human-rights advocate and author of “Free Speech: A Global History from Socrates to Social Media”
Phumzile Van Damme, former South African MP, founder of the South African Elections Anti-Disinformation Project and member of the Real Facebook Oversight Board
Tom Chivers, science writer and author of ‘How to Read Numbers’

ThinkIn

Does Netflix have a political agenda?

This is a digital ThinkIn. With a rapidly growing global subscriber base of over 215 million in more than 190 countries, huge sums of money, and a maturing recommendation system, Netflix has significant political and social force. But recent data from YouGov indicates that Netflix’s positive-impression rating among Republicans in the US is falling, down 16 per cent since the beginning of 2018. This follows the commissioning of left-leaning content, the hiring of senior Obama administration staff, and a deal with the Obamas’ Higher Ground Productions for original programming. On the other hand, despite public protests and staff walkouts, the streaming giant stood by comedian Dave Chappelle. Until recently, it appeared Netflix had successfully walked a political tightrope, but could a tumbling share price and a slowdown in new subscribers signal a change of direction? Does profit mean more than politics or social change?

Editor and invited experts
James Harding, Co-founder and Editor
Gina Keating, author of “Netflixed: The Epic Battle for America’s Eyeballs”, who wrote about media, law and government as a staff writer for Reuters and United Press International for more than a decade
Karen McNally, Reader in American Film, Television and Cultural History at London Metropolitan University
Lucas Shaw, reporter for Bloomberg, leader of its media, telecom and entertainment team, and author of the Hollywood newsletter “Screentime: A front-row seat to the collision of Hollywood and Silicon Valley”
Micheal Flaherty, co-founder and president of Walden Media and producer of Netflix vs. The World

ThinkIn

Is Instagram bad for you?

This is a digital-only ThinkIn. Join us for a discussion about Instagram – especially its impact on young people, mental health and body image. How aware was Facebook of the problems Instagram is causing, and to what extent did profit outweigh people when it considered the platform’s influence on young people? Instagram’s latest global ad campaign is built on the strapline that a user’s identity is “yours to make” – a hollow claim when you see the pressure social platforms put on the self-esteem and body image of young people. Is Instagram inherently flawed as an idea, or are there ways to fix the platform without damaging its popularity? As it turns a decade old, with popular competitors stealing its audience, is Instagram on its way out anyway? And how have the real impacts of Instagram on people escaped scrutiny for so long?

Editor and invited experts
Luke Gbedemah, Reporter
Dr Lis Sylvan, Managing Director of the Berkman Klein Center for Internet & Society at Harvard University
Ian Russell, Chair, Molly Rose Foundation
Kyle Dent, Head of AI Ethics, Checkstep

ThinkIn

In conversation with Google’s Matt Brittin

This is a digital-only ThinkIn. Google is one of the superpowers of the internet: its technology, tools and services touch the daily lives of billions of people who go online every day. That enormous presence and influence comes with great responsibility. Society expects higher standards from corporations, and governments are paying closer attention than ever to how big tech operates. Matt Brittin will be in conversation with James Harding, Tortoise editor and co-founder, about how Google approaches this responsibility, and about the role digital technology, tools and skills can play in enabling a sustainable and inclusive recovery from Covid-19 – from tackling climate change to supporting an evolving labour market.

Editor and invited experts
James Harding, Co-founder and Editor
Matt Brittin, President of EMEA Business & Operations for Google

ThinkIn

Tech States: what we’ve learnt, and what next?

This is a digital-only ThinkIn. For our last Open News meetings of the year, Tortoise journalists and members, together with expert contributors who’ve worked with us throughout 2021, will take stock of what our reporters have uncovered about what’s driven the news this year, and what it has told us about the forces shaping our world. What have we learned? What questions remain unanswered, and what new ones have arisen? At Tortoise, we always said that we would stay interested when the rest of the news media moves on. We start with Tech States, our ongoing investigative work into how the big technology companies operate, and the way the decisions of these few giant private companies are changing our lives and our democracies. How will Meta, otherwise known as Facebook, shape 2022 and beyond?

Editor
Luke Gbedemah

ThinkIn

Will technology widen the power gap? In conversation with Azeem Azhar

This is a newsroom ThinkIn. In-person and digital-only tickets are available. Azeem Azhar – writer, entrepreneur and creator of the hit Exponential View newsletter and podcast – argues that accelerating technology risks leaving our social institutions behind, with devastating implications for our way of life. His newsletter is regarded as one of the best-researched and most thought-provoking in tech. In his new book, Azhar draws on nearly three decades of conversations with the world’s leading thinkers to outline models that explain the effects technology is having on society. New technology, he shows, is developing at an increasing, exponential rate. But human-built institutions – from our businesses to our political norms – can only ever adapt at a slower, incremental pace. The result is an ‘exponential gap’ between the power of new technology and our ability to keep up. Pre-order Azeem’s book, Exponential: How Accelerating Technology Is Leaving Us Behind and What to Do About It.

Editor
Alexi Mostrous, Investigations Editor

ThinkIn

Sensemaker Live: Can’t we just ditch social media and get along?

Long stories short
Alibaba founder Jack Ma resurfaced in mainland China after reportedly spending more than a year overseas.
South Korean police said they had arrested Do Kwon, the boss accused of fraud over the TerraUSD and Luna cryptocurrencies.
US House Speaker Kevin McCarthy said lawmakers “will be moving forward” with a TikTok bill.

Tall tales
Last week Google released its chatbot Bard – an answer to OpenAI’s ChatGPT – with almost as many caveats as capabilities. Companies like Google and Microsoft are rushing to deploy large language models even though problems with hallucinations and misinformation remain unaddressed.

So what? Generative artificial intelligence may become a huge market. It could threaten 300 million jobs and boost gross domestic product around the world by 7 per cent in the next decade, according to Goldman Sachs. Google’s deployment of Bard shows that companies are taking precautions over the flaws in their models – but releasing them into the market nonetheless.

What could go wrong? Sundar Pichai, the company’s CEO, told employees that “things will go wrong” with Bard, after an initial public demonstration of the new product showed it spitting out falsehoods. Pichai’s hope is that users will eventually make Bard better by giving it feedback. Generative models, including Bard and ChatGPT, can:
say things with confidence that aren’t true (called “hallucinated responses”);
create deceptive images or texts (like this picture of Pope Francis);
be manipulated into expressing biased or harmful views;
have logic failures; and
be rude.

Users of Google’s new language model are reminded that it might get things wrong and are encouraged to turn to Google’s search products to verify its statements. This amounts to telling users: “abandon trust, all ye who type here,” as James Vincent put it in The Verge. A relatively innocuous example: Bard told one user that the months that follow January and February are “Maruary, Apruary, Mayuary…”

Google says it has solicited feedback from 10,000 testers “from a variety of backgrounds and perspectives” to combat this issue, but ultimately accepts that problems will arise. Rather than waiting until Bard’s results were as reliable as those generated by its eponymous search engine, Google pushed the chatbot out via a public waiting list in order to compete with Microsoft-backed ChatGPT, which can do much of what Bard can but without access to the Google Search data infrastructure. Beyond Bard, more untested and unfinished applications from other companies are likely to follow.

Why? The huge potential of the technology is driving growth and incentivising companies to get a foot in the door as soon as possible. Bill Gates, founder of Microsoft and still an advisor to the company, has said that “entire industries will reorient” around generative models. The sense of an accelerating race to deploy is palpable.

By the numbers
29 per cent – the proportion of Gen Z in the US already using generative models at work.
100 million – the number of users acquired by ChatGPT two months after launch.
$11 billion – Microsoft’s total investment in OpenAI.
$100 billion – the share value Google lost after its failed Bard demo.

What’s the problem? Super-charged misinformation and the atrophy of human intelligence. Because they regurgitate information that is already on the internet, generative models cannot decide what is a good thing to tell a human and will repeat past mistakes made by humans, of which there are plenty.

“Companies like Google and Microsoft may be the only players capable of dealing with the darker side of these models,” said Simon Greenman, a partner at BestPracticeAI and a member of the World Economic Forum’s Global AI Council, “but new types of attack, and new issues will emerge, like prompt injections that circumvent guardrails, prejudice against specific groups or more erroneous and offensive answers.”

What’s the solution? Possibly, keeping humans in the loop. “Large language models are remarkably creative, at the extreme this leads to hallucination. The way to solve this is to combine the creative power of the models with traditional ways of verifying knowledge: verifying sources and using human experts as fact checkers,” said Sean Williams, the founder of AutogenAI. “Using human experts is about creating a brilliant interface between humans and the large language model.” A rough sketch of what that kind of source-and-reviewer check could look like in code follows at the end of this briefing.

Generative artificial intelligence is still in its infancy, despite the recent breakthroughs. But the systems for controlling its impact on people need to be figured out sooner rather than later. As Sam Altman, CEO of OpenAI, put it: “Society, I think, has a limited amount of time to figure out how to react to that, how to regulate that, how to handle it.” Academics argue that regulations, like those currently being debated in the European Parliament or the White House’s AI Bill of Rights, are not ready to cope with the boom in generative artificial intelligence and the misinformation it could create.

Google named its model Bard after the storytellers of old, who were known for telling believable but fantastical stories. For now – with companies scrambling to use the public as testers and competing to release the next big application – some models are doing just that: spinning tall tales.

Apple: Possible headache
Employees at Apple have expressed concerns that the company’s augmented reality headset will miss the mark on price and usefulness. According to the New York Times, eight current and former staff say the headset is viewed with scepticism within the company – a stark contrast with the enthusiasm that greeted previous landmark releases. Part of the problem is price. The device is tipped to retail at $3,000, and employees are unsure whether the market for such a product really exists. But the world’s most valuable company has always had a knack for creating new demand. So if history is any guide, Apple’s foray into headsets will be a market-making success.

Meta: Addicted users
Meta is facing a lawsuit filed by the San Mateo County Board of Education in California. The suit – which also names Google, Snap and TikTok – alleges that the platforms have actively engaged in getting children addicted to social media, negatively affecting their behaviour and wellbeing. The filing describes Meta’s “end goal” of making “young people engage with and stay on the platforms as long as possible… This is best accomplished by catering an endless flow of content that is most provocative and toxic.” Meta has yet to respond, but it has faced similar criticism many times in the past and tends to point to its child safety and fact-checking features.

Microsoft: U-turn
The UK competition authority, the CMA, has revised a previous decision about the impact of Microsoft’s acquisition of Activision Blizzard. The CMA had determined that Microsoft could profit from withholding Call of Duty games from the platforms of its main rival, Sony. Last week it changed that decision, determining that the acquisition of Activision Blizzard and Call of Duty would “not materially affect” competition with Sony, which has a large catalogue of its own games. Microsoft had alleged that the CMA made errors in its calculations, and that those calculations rested on erroneous assumptions about Microsoft’s Xbox business. It seems Microsoft may have been right. The CMA’s decision is just one part of the considerations around the deal, and the final ruling is not due until the end of April.

Tencent: Well positioned
Tencent has reversed two successive quarters of revenue declines. CEO Pony Ma optimistically said the company is “well positioned to benefit from a rebound in China’s economic growth” with the end of the country’s zero-Covid policy. The optimism also rests on the government’s recent pledge to support China’s internet giants by easing its regulatory crackdown, as well as Tencent’s successful investments in WeChat’s short video function and overseas gaming. According to the FT, Tencent posted quarterly revenue of Rmb 145 billion ($21 billion) in the three months to December, a 0.5 per cent increase on the same period a year earlier, alongside a 19 per cent rise in net profit.

Google: Ghost workers
Some of Google’s Raters have been given a pay increase, from $14.50 to $15 per hour, according to NPR. Raters are the content moderators who help shape Google Search results and ensure that information is accurate and not distressing. The Alphabet Workers Union estimates that there are 200,000 Google Raters worldwide, meaning they would make up more than half of the company’s workforce. But, as unseen “ghost workers”, they are often managed through third-party contractors and are not entitled to the health insurance, paid leave or pension contributions that other Google employees get. Content labelling and moderation is vital for optimising language models like Bard.

Amazon: Cloud chasing
Amazon will offer artificial intelligence startups $300,000 of free cloud computing resources if they sign up to its Amazon Web Services platform, according to the WSJ. Amazon is chasing customers for its cloud services in the new and booming generative model industry. The rapid growth of businesses like OpenAI and Anthropic – which build models like the ones powering Bard, Bing and ChatGPT – has opened a path for cloud service providers like Amazon to grow too. Large models depend on cloud-based hosting services and are highly demanding in terms of data. Amazon is set to capitalise.

Thanks for reading. Please tell your friends to sign up, send us ideas and let us know what you think. Email sensemaker@tortoisemedia.com.

Luke Gbedemah
Additional reporting by Alexi Mostrous and Serena Cesareo
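
As flagged above, here is a rough, illustrative sketch in Python of the “humans in the loop” pattern Sean Williams describes: generate an answer, check each claim against retrieved sources, and route anything unsupported to a human fact checker. It is a sketch under assumptions, not anyone’s real product: every function and name in it (generate_answer, retrieve_sources, verify) is a hypothetical stand-in, and the stubs return canned values so the flow can run end to end.

```python
# A minimal, illustrative sketch of the "humans in the loop" pattern described
# in the briefing above: pair a generative model's output with source checks
# and send anything unverified to a human reviewer. All functions and names
# here are hypothetical stand-ins, not any vendor's real API.

from dataclasses import dataclass, field


@dataclass
class Claim:
    text: str
    supported: bool = False
    sources: list[str] = field(default_factory=list)


def generate_answer(prompt: str) -> list[Claim]:
    # Stand-in for a call to a large language model, with the answer
    # already split into individual factual claims.
    return [Claim("The months that follow February are March and April.")]


def retrieve_sources(claim: Claim) -> list[str]:
    # Stand-in for a conventional retrieval step (e.g. a search index lookup).
    return ["calendar-reference"]


def verify(claims: list[Claim]) -> tuple[list[Claim], list[Claim]]:
    """Split claims into source-supported ones and ones needing human review."""
    verified, needs_review = [], []
    for claim in claims:
        claim.sources = retrieve_sources(claim)
        # A real system would check that the sources actually entail the claim;
        # here a non-empty source list stands in for that judgement.
        claim.supported = bool(claim.sources)
        (verified if claim.supported else needs_review).append(claim)
    return verified, needs_review


if __name__ == "__main__":
    verified, needs_review = verify(generate_answer("What comes after February?"))
    print(f"{len(verified)} claims supported by sources, "
          f"{len(needs_review)} queued for a human fact checker")
```

The design point is the one Williams makes: the model’s output never reaches a reader directly, it passes through source retrieval first, and whatever cannot be grounded is handed to a human expert rather than published.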

ThinkIn

Banning Trump: did Facebook call it right?


ThinkIn

Toxicity in tech: Why are Google’s leading AI ethics researchers being ‘silenced’?
