GPT-4

Latest news headlines about artificial intelligence

OpenAI to Release Thinking 'Strawberry' AI Model Within 2 Weeks

Sept. 11, 2024, 10:51 a.m. • PYMNTS.com • (2 Minute Read)
OpenAI is set to unveil its latest artificial intelligence model, Strawberry, in the coming two weeks, according to reports from Seeking Alpha and The Information, cited by PYMNTS. The new model is designed to demonstrate advanced reasoning capabilities and improve its ability to comprehend and process complex information. OpenAI's efforts to push the boundaries of AI capabilities aim to revolutionize various sectors, including supply chain management, market forecasting, and customer experience personalization. With more than 1 million paying business users utilizing OpenAI's products, the company is reportedly seeking to raise "several billion dollars" in funding to maintain its position at the forefront of AI innovation.

MI6 and CIA using generative AI to combat tech-driven threat actors

Sept. 9, 2024, 6:34 a.m. • The Register • (4 Minute Read)
In a rare joint statement, CIA director Bill Burns and UK Secret Intelligence Service (SIS) chief Richard Moore revealed that their agencies have adopted generative AI to combat tech-driven threats from Russia and China. They stated that AI, including generative AI, is being used to improve intelligence activities, protect operations, and analyze vast amounts of data. The use of cloud technologies and collaboration with innovative companies were also emphasized. Moore disclosed that MI6 uses large language models to navigate extremist content and decode criminal vernacular on the internet. The intelligence chiefs highlighted the unique challenges posed by technology, especially in the context of the war in Ukraine, which has shown how quickly adversaries can combine different technologies. They also emphasized efforts to disrupt Russian disinformation campaigns and sabotage, as well as a growing focus on technology theft and security threats from China.

Google Researchers Publish Paper About How AI Is Ruining the Internet

July 4, 2024, 3 p.m. • Futurism • (1 Minute Read)
In a surprising turn of events, Google researchers have published a paper warning about the detrimental impact of generative AI on the internet. This revelation is especially ironic as Google itself has been a major advocate for advancing this technology. The study finds that most documented misuses of generative AI involve disseminating fake or manipulated content, blurring the lines between truth and deceit. This misuse has far-reaching consequences, leading to public deception, fraudulent activities, and a distortion of socio-political reality. The researchers also express concerns about the increasing sophistication and accessibility of generative AI systems, which exacerbate the proliferation of fake content. As companies like Google continue to integrate AI into their products, this issue is likely to persist and intensify, raising serious questions about the authenticity and reliability of digital information.

China has requested far more generative AI patents than any other country

July 4, 2024, 1:06 p.m. • Fortune • (3 Minute Read)
China has emerged as the leader in generative AI patents, surpassing other countries by a significant margin, according to the World Intellectual Property Organization. The U.S. is a distant second in this regard. In the past decade, China has accounted for over 38,200 generative AI inventions, six times more than the nearly 6,300 from the United States. The explosive growth of generative AI is evident as more than a quarter of these inventions emerged in the last year. While this technology has the potential to enhance efficiency and drive scientific discoveries, concerns about its impact on jobs and fair compensation for content creators have been raised. The report also highlights the evolving landscape of AI technologies, and while China leads in generative AI patents, the U.S. is at the forefront in developing cutting-edge AI systems and models.

Cloudflare debuts one-click nuke of web-scraping AI

July 3, 2024, 7:44 p.m. • The Register • (6 Minute Read)
Cloudflare has introduced a new feature enabling web hosting customers to block AI bots from scraping website content and using it without permission for training machine learning models. This move comes in response to customer concerns about dishonest AI bot visits and the unauthorized usage of website data. While the robots.txt file already offers a means to block bots, Cloudflare's new feature provides a one-click option to block all AI bots. The company's decision reflects the growing unease about AI companies using web content without consent, especially for training AI models. Cloudflare's new offering aims to provide a more robust defense against AI bots, which currently visit around 39 percent of the top one million web properties served by Cloudflare. This new tool is available to all customers and is located in the Security -> Bots menu for a given website, reflecting Cloudflare's commitment to helping content creators maintain control over their content.
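
For reference, the voluntary robots.txt approach mentioned above looks like the sketch below. GPTBot (OpenAI) and CCBot (Common Crawl) are real AI-related crawler user agents, but which bots a site chooses to list is an operator decision, and unlike Cloudflare's edge-level block, this only works if the crawler honors it.

```
# robots.txt sketch: ask specific AI crawlers to stay out entirely.
# GPTBot and CCBot are known AI-related user agents; the list here is illustrative.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

# All other crawlers remain allowed.
User-agent: *
Disallow:
```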

Can anyone beat Nvidia in AI? Analysts say it's the wrong question

July 2, 2024, 10:30 a.m. • Fortune • (5 Minute Read)
Nvidia's dominance of the AI chip market, of which it controls 80%, has raised questions about potential competitors. While some analysts suggest that there is no apparent threat to Nvidia at the moment, others see opportunities for smaller AI chip startups to target specific segments of the market, particularly where AI companies have specialized needs. Despite competition from companies like AMD and Intel, Nvidia's established user base and software ecosystem make it difficult for rivals to displace its position. Nvidia's impending release of the Blackwell system and its strong software platform further indicate its determination to maintain its lead in the industry. However, potential challenges such as antitrust investigations and the need to address power consumption concerns could provide openings for competitors to gain traction. The article emphasizes the potential for a healthy ecosystem of chipmakers and software creators to thrive in the expanding AI market, pointing to a space for many players to coexist alongside Nvidia.

AI scientist Ray Kurzweil: 'We are going to expand intelligence a millionfold by 2045'

June 29, 2024, 5:59 p.m. • The Guardian • (6 Minute Read)
The American computer scientist and AI authority Ray Kurzweil predicts that by 2045, human intelligence will increase a millionfold through the merger of brain and cloud computing. Kurzweil, a Google principal researcher, foresees that AI will reach human-level intelligence, the threshold commonly equated with artificial general intelligence (AGI), by 2029. He attributes this progress to the exponential growth in computing power and predicts that contextual memory, common-sense reasoning, and social interaction in AI will improve with continued advancements. Kurzweil acknowledges the potential risks of advanced AI but emphasizes the profound advantages and the ongoing efforts by major companies to ensure the safety and alignment of AI with human values. He also discusses the potential societal and legal implications of digital immortality and offers personal insights into his health and longevity practices.

Look out, Meta Ray-Bans: These are the world's first smart glasses with GPT-4o

June 29, 2024, 12:06 p.m. • ZDNet • (4 Minute Read)
Solos has unveiled the world's first smart glasses equipped with GPT-4o, featuring generative artificial intelligence capable of analyzing visual input. The AirGo Vision smart glasses utilize AI to provide real-time information based on what the wearer sees, offering services such as recognizing people, objects, and landmarks, providing directions, and checking prices. Wearers can also capture hands-free photos to request information. The glasses feature swappable frames, with options for a front camera or no camera, and a built-in LED notification light for discreet alerts. The AirGo Vision will be available later this year, with LED-only frames set for release in July at a price of $249.99.
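
The article does not describe how Solos wires the camera to the model, but as a rough sketch of the underlying capability, an application can pass a captured frame to GPT-4o alongside a question via the OpenAI Python SDK. The image URL and prompt below are placeholders, not Solos' implementation.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# GPT-4o accepts mixed text-and-image input in a single user message.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What landmark is in this photo, and what is it known for?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/frame.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```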

Emergence thinks it can crack the AI agent code

June 24, 2024, 5 p.m. • TechCrunch • (8 Minute Read)
Emergence, a new generative AI venture, has emerged from stealth with $97.2 million in funding and credit lines totaling over $100 million. Co-founded by Satya Nitta, Emergence aims to build an "agent-based" system that can handle tasks typically managed by knowledge workers. The company plans to use first- and third-party generative AI models to automate tasks such as filling out forms, searching for products, and navigating streaming services. Emergence has introduced an open-source orchestrator agent, which functions as an automatic model switcher for workflow automations, and intends to monetize it with a premium version in the near future. Emergence has also formed strategic partnerships with Samsung and touch-display company Newline Interactive to integrate its technology into future products. Despite the buzz surrounding AI agents, how Emergence differentiates itself remains unclear; the company is confident it can solve fundamental AI infrastructure problems and deliver a clear, immediate ROI for enterprises, but skepticism remains about its ability to outperform other players in the generative AI space.
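
The article does not detail how the orchestrator works internally, but the "automatic model switcher" idea can be sketched generically: route each workflow step to the cheapest registered model that claims the required capability. Every name, price, and capability tag below is invented for illustration and is not Emergence's design.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Model:
    name: str
    cost_per_1k_tokens: float   # hypothetical pricing
    capabilities: frozenset     # task types the model is trusted to handle

# Invented registry; a real orchestrator would populate this from config or evaluations.
REGISTRY = [
    Model("small-form-filler", 0.0005, frozenset({"form_filling"})),
    Model("mid-generalist", 0.003, frozenset({"form_filling", "web_navigation"})),
    Model("frontier-planner", 0.03, frozenset({"form_filling", "web_navigation", "planning"})),
]

def route(task_type: str) -> Model:
    """Return the cheapest model whose declared capabilities cover the task."""
    candidates = [m for m in REGISTRY if task_type in m.capabilities]
    if not candidates:
        raise ValueError(f"no registered model handles {task_type!r}")
    return min(candidates, key=lambda m: m.cost_per_1k_tokens)

print(route("web_navigation").name)  # -> mid-generalist
```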

Perplexity Is Reportedly Letting Its AI Break a Basic Rule of the Internet

June 20, 2024, 5:15 p.m. • Gizmodo • (2 Minute Read)
In a recent development, Perplexity, an AI search startup backed by Jeff Bezos, has come under scrutiny for its apparent disregard of the Robots Exclusion Protocol, a widely accepted web standard. The company is accused of bypassing operators' restrictions using an unlisted IP address to access and scrape web content that is meant to be off-limits to bots. Despite claiming to respect the protocol in its documentation, Perplexity's actions raise questions about its adherence to basic internet rules. Additionally, the startup is facing legal threats for copyright infringement after allegedly using Forbes' content without proper attribution. These actions not only challenge the integrity of internet regulations but also disrupt the business model of digital media. While Perplexity is reportedly working on partnerships to address these issues, its current practices raise concerns about the impact of AI on web traffic and content distribution.

Anthropic has a fast new AI model -- and a clever new way to interact with chatbots

June 20, 2024, 2 p.m. • The Verge • (2 Minute Read)
Anthropic, a leading AI company, has unveiled its latest model, Claude 3.5 Sonnet, boasting enhanced speed and a more personable style. The new model, available on the web and iOS, has surpassed its predecessor and outperformed competitors such as OpenAI's GPT-4o and Google's Gemini on various tasks. Anthropic also introduced Artifacts, a feature that allows users to interact with Claude's outputs, broadening the AI's capabilities beyond typical chatbot functions. With a focus on business applications, Anthropic aims to position Claude as a tool for companies to centralize their knowledge and ongoing work. The competitive pace of AI advancements is evident, with Anthropic's rapid progress signaling ongoing innovation in the field.

Two ways you can build custom AI assistants with GPT-4o - and one is free!

June 20, 2024, 1:53 p.m. • ZDNet • (5 Minute Read)
OpenAI's latest model, GPT-4o, offers unprecedented levels of intelligence and versatility, but using it through ChatGPT often requires detailed instructions. However, users can bypass this by creating custom AI assistants, which can efficiently execute specific tasks without extensive prompting. Building AI assistants is now accessible through two platforms: ChatGPT and You.com. ChatGPT users can easily customize their chatbots within the platform, albeit with a $20 monthly fee for ChatGPT Plus. On the other hand, You.com allows users to create custom assistants for free, using a variety of advanced AI models. With these customizable AI assistants, users can streamline repetitive tasks and save time, whether for personal or business use.
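
Both routes described above are no-code builders, but the same pattern can be reproduced against the OpenAI API: a "custom assistant" is essentially a fixed set of instructions reused on every call. The sketch below uses the OpenAI Python SDK with an invented instruction string; it illustrates the pattern rather than either platform's implementation.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Fixed "custom instructions" turn a general model into a purpose-built assistant.
# This release-notes example is invented for illustration.
INSTRUCTIONS = (
    "You are a release-notes assistant. Given raw commit messages, produce a "
    "short, user-facing changelog grouped into Features and Fixes."
)

def ask_assistant(user_input: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": INSTRUCTIONS},
            {"role": "user", "content": user_input},
        ],
    )
    return response.choices[0].message.content

print(ask_assistant("feat: add dark mode\nfix: handle empty cart gracefully"))
```

Unlike the flat-fee ChatGPT Plus route or the free You.com tier, the API route bills per token.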

London premiere of movie with AI-generated script cancelled after backlash

June 20, 2024, 12:40 p.m. • The Guardian • (2 Minute Read)
The London premiere of a film starring Nicolas Pople and directed by Peter Luisi was cancelled after a backlash over its AI-generated script. The Prince Charles Cinema, in London's West End, was due to host the showing of The Last Screenwriter, but announced on social media that the screening would not go ahead because of strong concerns from its audience about the use of AI in place of a human writer. Although the film-makers insist that the feature is itself a contribution to the cause, the cinema received 200 complaints, leading to the cancellation. The incident reflects the ongoing debate in the film industry over the use of AI in the writing process.

AI is replacing human tasks faster than you think

June 20, 2024, 12:06 p.m. • CNN • (6 Minute Read)
In a recent survey of finance chiefs conducted by Duke University and the Federal Reserve Bank of Atlanta, nearly half of large US firms said they plan to use artificial intelligence (AI) within the next year to automate tasks previously done by employees. These tasks range from paying suppliers and processing invoices to crafting job posts, writing press releases, and building marketing campaigns. The survey also indicated that companies are turning to AI to cut costs, boost profits, and make their workers more productive. While some experts believe AI may not cause mass job loss in the near future, the survey raises concerns about the speed of adoption and the need for strong risk-management systems and redundancies as companies experiment with the technology. Companies see the shift toward AI as a way to respond to a variety of concerns, such as inflation and regulatory frameworks, but it also poses significant risks as they navigate the transition.

If AI is so good, why are there still so many jobs for translators?

June 18, 2024, 10:30 a.m. • NPR • (7 Minute Read)
In a recent article, NPR reports that despite advances in AI translation, human translators are still in high demand. Contrary to predictions, the number of translator and interpreter jobs is increasing, as seen in Bureau of Labor Statistics data. The demand for human translators and interpreters is driven by the complexity of linguistic tasks that require creativity, cultural sensitivity, and an understanding of subtle nuances in meaning, qualities that AI often cannot replicate. Moreover, AI is used as a tool by human translators and interpreters to enhance productivity rather than replacing them entirely. While the integration of AI has made translation faster and cheaper, it has also raised concerns about the devaluation of translation skills due to increased competition. Wages for translators and interpreters have seen some growth according to the data, but there are concerns about potential wage disparities between those who master AI and those who don't. Ultimately, the article suggests that AI's impact on the translation industry may not be as detrimental as originally anticipated, and that demand for human translators and interpreters is likely to persist because of the unique and hard-to-replace skills they bring.

How's this for a bombshell - the US must make AI its next Manhattan Project

June 15, 2024, 6:58 p.m. • The Guardian • (3 Minute Read)
The Observer has reported on a new essay by Leopold Aschenbrenner, a young German AI researcher formerly at OpenAI, arguing that the United States urgently needs to make AI its next Manhattan Project. Aschenbrenner predicts that superintelligence is approaching, with AGI expected as early as 2027, and that the world is unprepared for it. He argues that the US must rapidly lock down the security of its AI labs and lead AI research to protect national security. Aschenbrenner believes the US has a lead in AI and must maintain it before other countries catch up, drawing a parallel between AI development and the nuclear arms race. He advocates for an "AGI-industrial complex" and a new Manhattan Project to accomplish this goal. The essay has sparked discussions about the potential impact and implications of superintelligent machines and the race for AI dominance.

Here's everything Apple announced at the WWDC 2024 keynote, including Apple Intelligence, Siri makeover

June 11, 2024, 1 p.m. • TechCrunch • (10 Minute Read)
At the WWDC 2024 keynote, Apple unveiled several major announcements, including the introduction of Apple Intelligence and a significant makeover for its voice assistant, Siri. The company emphasized its artificial intelligence ambitions, with key updates spanning Siri, Genmoji, ChatGPT integration, photo editing, and more. Apple also signaled plans to support third-party AI models beyond OpenAI's, including Google's Gemini. Apple Intelligence will be limited to specific newer devices, and its features focus on more personalized experiences across Apple's apps. The event also showcased software and hardware updates, such as the macOS Sequoia operating system, an enhanced Photos app, and a new "Smart Script" feature for the Apple Pencil on iPad. Other notable developments include the introduction of Tap to Cash for iPhone payments and the launch of iOS 18, which will offer increased customization and support for generative AI.

Read ChatGPT's take on Leopold Aschenbrenner's AI essay

June 9, 2024, 6:42 p.m. • Business Insider • (5 Minute Read)
In the news, there's significant debate over fired OpenAI researcher Leopold Aschenbrenner's 165-page essay on the future of AI, published after his dismissal from the company. ChatGPT was used to summarize Aschenbrenner's essay, distilling it into a few key takeaways. Aschenbrenner's work addresses the rapid progress of AI, the associated economic and security implications, technical and ethical challenges, and societal impact. Notably, he predicts the arrival of artificial general intelligence (AGI) by 2027 and emphasizes the need for careful consideration of the societal and economic transformations that AI will bring. He also anticipates significant involvement by the US government in AI development by 2027-2028.

AI vs humans: Why soft skills are your secret weapon

June 9, 2024, 6:05 p.m. • VentureBeat • (5 Minute Read)
In the news story titled "AI vs humans: Why soft skills are your secret weapon," Marina Minnikova highlights the increasing role of AI in our lives, emphasizing that while AI can handle numerous tasks efficiently, there are essential soft skills that remain exclusive to humans. The article underscores that skills like creativity, leadership, interpersonal communication, emotional intelligence, and ethical decision-making are crucial strengths. It suggests that developing these skills is imperative for staying ahead in the AI era. Minnikova asserts that instead of competing with AI in tasks it excels at, focusing on nurturing uniquely human traits and soft skills will be the key to success in the rapidly evolving technological landscape.

Google's and Microsoft's AI Chatbots Refuse to Say Who Won the 2020 US Election

June 7, 2024, 1:59 p.m. • WIRED • (4 Minute Read)
In a peculiar turn of events, Google's and Microsoft's AI chatbots, Gemini and Copilot, are refusing to disclose the winner of the 2020 US presidential election. Despite their advanced capabilities, both chatbots decline to give a definitive answer when asked who won the election, and they likewise refuse to provide results for any election worldwide, past or present. This unexpected behavior comes at a crucial time, with the 2024 US presidential election on the horizon. The chatbots' reluctance to address election-related queries raises concerns about their accuracy and reliability as sources of information, a problem exacerbated by the persistence of baseless election conspiracies among some Americans. Officials from Google and Microsoft have acknowledged the chatbots' limitations, attributing them to deliberate efforts to redirect election-related queries to their respective search engines. The episode sheds light on the challenges and limitations of AI technology in addressing sensitive and complex topics, especially in the politically charged landscape of elections.