Latest news headlines about artificial intelligence
The identity and access management (IAM) startup IndyKite of San Francisco has introduced a new approach to addressing the trust problem in generative AI. By applying cybersecurity techniques to vet sources of data, the software aims to ensure the trustworthiness of the data used in any business or analytics model. The system, built on the popular Neo4j graph database, verifies the origins of data before it is used to train programs, potentially addressing issues related to biases and data drift in generative AI. With $10.5 million in seed financing from Molten Ventures, Alliance Ventures, and SpeedInvest, IndyKite's efforts to enhance trust and accuracy in Gen AI are gaining attention in the cybersecurity and artificial intelligence fields.
Microsoft’s AI vision of the future, Copilot Pro, was recently launched as a $20 monthly subscription, providing access to AI features in Office apps and offering improved image generation tools. A senior editor at The Verge recently tested Copilot Pro to assess its value. The improved image creation tool, Designer, utilizing OpenAI’s DALL-E 3 model, impressed with its ability to generate detailed and realistic images. The AI capabilities inside Office apps were found to be useful for text generation and data visualization in Word, PowerPoint, Excel, and Outlook, although the effectiveness varied. While some features were impressive, the editor questioned whether the $20 monthly subscription fee was justified, given the availability of similar free tools and the scope for further improvements to Copilot Pro in the future.
Google has released Gemma 2B and 7B, two smaller open-source AI models designed for language tasks in English. These models offer developers more freedom to use the research that went into Google's flagship Gemini, which competes with OpenAI's ChatGPT. Despite being smaller, Gemma models reportedly "surpass significantly larger models on key benchmarks" and can run directly on a developer's laptop or desktop computer. Google's decision to make Gemma open source contrasts with the closed nature of Gemini, allowing for wider experimentation with the company's AI. The release of Gemma also comes with a "responsible AI toolkit" to help developers manage the models. While currently optimized for English language tasks, Google aims to expand Gemma's capabilities to address market needs in other languages. Developers can access Gemma for free on Kaggle, and first-time Google Cloud users can receive up to $300 in credits. Other AI companies, such as Meta, have also released lighter-weight versions of their flagship models, reflecting a broader trend in the industry.
A new wave of AI agents is making headlines in Silicon Valley, offering to carry out real-world tasks for users. These AI-powered assistants can now book vacations, order food through DoorDash, and even call an Uber. Rabbit Inc. is one such company that has developed the Rabbit R1, a device the size of an iPhone that responds to voice commands and aims to fulfill various tasks using AI technology. The company claims it has received over 80,000 preorders for the Rabbit R1, which will be shipped in the coming months. However, the rise of AI agents has raised concerns about privacy and potential misuse, as researchers warn about the risks of AI-powered technology going rogue. Despite the excitement in Silicon Valley over these AI assistants, the need for separate hardware devices has sparked debates about their necessity in a world where smartphones are becoming increasingly advanced in harnessing AI technology.
Groq, an AI chip company, has gained attention for demonstrating lightning-fast AI performance, outpacing Elon Musk's similarly named chatbot, Grok. As shown in viral demos, Groq claims to power the world's fastest large language models, outpacing current versions of ChatGPT, Gemini, and Grok. The company's Language Processing Units (LPUs) have been reported to be faster than Nvidia's Graphics Processing Units (GPUs), previously considered the industry standard for running AI models. This breakthrough in speed could potentially enhance the practical use of AI chatbots by enabling real-time, human-like conversations and interactions. While Groq's advancements have garnered significant interest, the scalability and broader impact of its AI chips remain to be seen in comparison to existing technologies.
ChatGPT, a language model, is making waves in the scientific community by enabling scientists to produce review articles. However, Melissa Kacena, vice chair of orthopaedic surgery at Indiana University School of Medicine, has found that up to 70% of the cited references in articles written by ChatGPT were inaccurate, and the AI-generated content was also more likely to be plagiarized. Despite these challenges, ChatGPT shows promise in its ability to process data efficiently and produce cleaner grammar than many human writers. With proper programming and training, it could potentially become a valuable tool for researchers. Kacena emphasizes the need to use AI in an ethical and scientifically sound manner and believes that with more input and fixes, ChatGPT could assist researchers in streamlining the writing process and gaining scientific insights.
In a recent development, scientists have been exploring the fusion of large language models (LLMs) such as ChatGPT with robot bodies, aiming to overcome the limitations of traditional robotic programming. This integration, however, poses significant challenges and ethical concerns. The use of LLMs offers robots access to extensive knowledge and enables them to communicate in natural language. Yet, the practical application of this technology raises questions about the potential risks and limitations. While some researchers are excited about the possibilities for a leap forward in robot understanding, others are cautious, citing occasional errors, biased language, and privacy violations associated with LLMs. Despite the remarkable capabilities of LLMs, concerns persist about their reliability and potential implications in real-world scenarios. The ongoing debate underscores the need for careful consideration of the integration of LLMs into robot bodies.
SoftBank Group’s Founder, Masayoshi Son, is seeking $100 billion to establish a new AI chip venture, reports Bloomberg. The venture, codenamed Izanagi, aims to rival Nvidia in the AI chips market and will collaborate with Arm, a chip design company partially owned by SoftBank. The fundraising strategy involves seeking $70 billion from Middle East-based institutional investors, with SoftBank contributing the remaining $30 billion. This initiative reflects SoftBank's pivot towards AI following its historical success with Alibaba and aims to capitalize on the rising demand for AI processors. Additionally, OpenAI's Sam Altman is in talks with investors in the UAE for a separate AI chip project. These developments signal a significant shift in SoftBank's investment focus and its potential impact in the AI industry.
Google’s AI boss, Demis Hassabis, recently spoke with WIRED about the future of AI, expressing the belief that scaling computer power and data is not the only path to unlocking artificial general intelligence (AGI). Hassabis emphasized the need for new innovations and advancements in AI beyond just increasing scale. While acknowledging the importance of scale, he highlighted that fundamental research and senior research scientists are also crucial to AI development. He also discussed the development of Gemini 1.5 Pro, a new AI model that can handle vast amounts of data, and a potential shift towards AI systems with planning and agent-like capabilities. Hassabis also stressed the need for meticulous safety measures as AI becomes more powerful and active. The conversation with WIRED shed light on Google's approach to AI and the ongoing efforts to advance the field beyond simply scaling existing techniques.
The integration of generative AI in physical industries is expected to bring about significant benefits, particularly in sectors such as transportation, logistics, construction, energy, and field service. Unlike discriminative AI models, which rely on existing data for predictions, generative AI has the capability to synthesize entirely new data sets, making it indispensable for training AI models in scenarios where sourcing real data is dangerous, difficult, or sparse. This approach holds tremendous potential in addressing critical challenges, such as safety in the workplace, by enabling the creation of synthetic data sets for diverse and challenging use cases. Generative AI is poised to transform the physical economy, with the potential to mitigate the impact of natural disasters, combat climate change, and enhance operational efficiencies. Key factors for leveraging generative AI in physical businesses include investing in a highly skilled team and ensuring data quality, while also focusing on translating insights and capabilities into meaningful results for businesses and their clientele. Jairam Ranganathan, a leader in product management, design, data science, and strategy for Motive, emphasized the transformative potential of generative AI in reshaping the industries that fuel everyday lives.
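The distinction drawn above between discriminative models (which predict from existing data) and generative models (which synthesize new data) can be illustrated with a toy sketch. The scenario and numbers below are purely hypothetical assumptions for illustration, not any vendor's method: a handful of sensor readings from a hazardous site are too sparse to train on, so a simple generative model (here, a fitted Gaussian) synthesizes plausible additional samples to augment the set.

```python
import random
import statistics

# Hypothetical scenario: only a few real sensor readings exist,
# because collecting more on-site would be dangerous or costly.
real_readings = [4.8, 5.1, 5.3, 4.9, 5.0]

# A generative model first learns the distribution of the real data...
mu = statistics.mean(real_readings)
sigma = statistics.stdev(real_readings)

# ...then synthesizes new, plausible samples from that distribution.
random.seed(42)  # fixed seed for reproducibility
synthetic_readings = [random.gauss(mu, sigma) for _ in range(1000)]

# The augmented data set is now large enough for downstream training.
augmented = real_readings + synthetic_readings
print(len(augmented))  # 1005
```

Real systems use far richer generative models (diffusion models, LLMs, simulators) than a single Gaussian, but the principle is the same: learn the distribution of scarce real data, then sample from it where collecting more real data is dangerous, difficult, or sparse.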
The news story discusses the limitations of deep learning in achieving artificial general intelligence (AGI). It highlights the challenges of using deep learning, which relies on prediction and has difficulty handling unpredictable real-world scenarios. The article proposes the use of decision-making under deep uncertainty (DMDU) methods, such as Robust Decision-Making, as a potential framework for realizing AGI reasoning. It emphasizes the need to pivot towards decision-driven AI methods that can effectively handle uncertainties in the real world. The authors, Swaptik Chowdhury and Steven Popper, advocate for a departure from the deep learning paradigm and emphasize the importance of decision context in advancing towards AGI.
OpenAI is testing a memory feature for ChatGPT, allowing it to recall details from past interactions to streamline future conversations. This feature, currently available to a limited number of users, is designed to enhance personalization and efficiency, with robust controls for privacy, including the ability to manage, delete, or disable memory. Additionally, OpenAI is extending memory capabilities to its GPT models, ensuring tailored interactions across various applications. This development promises significant improvements in user experience, emphasizing control and privacy, and offers particular benefits for Enterprise and Team users by adapting to their specific preferences and work styles.
Microsoft-backed OpenAI has completed a deal that values the artificial intelligence company at $80 billion, as reported by the New York Times. The deal involves selling existing shares in a tender offer led by venture firm Thrive Capital. This move allows employees to cash out their shares rather than the company raising new capital through a traditional funding round. OpenAI, which received a $10 billion investment from Microsoft in January 2023, has attracted significant funding rounds and has been making strides in AI technology. CEO Sam Altman has been in talks to acquire a chip builder to enhance the company's access to AI chips. However, the company's large investments have drawn regulatory attention, with European Commission and Federal Trade Commission officials investigating potential antitrust concerns.
OpenAI's attempt to trademark the term GPT, which stands for generative pre-trained transformer, has been denied by the US Patent and Trademark Office. The office stated that GPT is too general a term to be claimed as a trademark, and allowing its registration could prevent competitors from accurately describing their own products as GPT. Despite OpenAI's argument that GPT is not a descriptive word, the office emphasized that those familiar with the technology understand GPT refers to a general type of software, not just OpenAI products. This is not the first time OpenAI's trademark claim for GPT has been rejected, and the company can appeal the decision to the Trademark Trial and Appeal Board for another opportunity to obtain the trademark.
OpenAI has introduced Sora, a text-to-video generator that uses generative artificial intelligence to create short videos based on written commands. While not the first of its kind, industry analysts have praised the high quality and significant leap in text-to-video generation displayed by Sora. The tool can create videos up to 60 seconds long and is capable of generating videos from existing still images. Despite the impressive capabilities demonstrated so far, there are concerns about potential ethical and societal implications, especially regarding the spread of misinformation and fraudulent content. Although not yet available to the public, OpenAI is engaging with policymakers and artists to address these issues before releasing the tool officially.
The latest development in generative AI technology has been unveiled by OpenAI, prompting concerns about unregulated and consequence-free advancements. The program, named Sora, is capable of producing photorealistic media from simple text inputs, raising issues of legality, privacy, and objective reality. OpenAI's decision to grant limited access to Sora's capabilities without providing technical details has deepened worries about the lack of regulatory oversight. As the company aims to achieve artificial general intelligence, the potential implications of Sora's development on the online landscape are vast and warrant attention from policymakers and industry watchdogs to address potential risks and harms.
OpenAI has revealed a new AI tool named Sora that can instantly generate videos from a single line of text. The tool, whose name comes from the Japanese word for "sky," has the ability to accurately interpret props and create characters with vibrant emotions. OpenAI showcased a variety of videos created by Sora, including a lifelike woman walking down a rainy Tokyo street and woolly mammoths treading through a snowy meadow. However, there are concerns about the tool's potential misuse, with experts raising issues about copyright, privacy, and the potential for trickery and manipulation. OpenAI has stated that it is engaging with artists, policymakers, and experts to ensure the safety of the new tool before its public release. Despite extensive research and testing, the company acknowledges that it cannot predict all the potential beneficial and abusive uses of the technology. This follows a lawsuit from The New York Times alleging that OpenAI and its investor Microsoft unlawfully used the newspaper's articles to train another AI model, ChatGPT, which now competes with the newspaper's service. OpenAI also announced the termination of accounts belonging to state-affiliated groups that were reportedly using its AI models for hacking purposes.
The new OpenAI tool, Sora, has generated 8 wild and fascinating videos using artificial intelligence. These videos showcase the ability of AI to create visually stunning and imaginative content. OpenAI CEO Sam Altman invited users to suggest captions for videos they'd like to see, and the resulting videos from Sora are both mesmerizing and thought-provoking. This development marks an exciting new chapter in the use of AI for creative purposes, expanding beyond just text and images. While there are understandable concerns about the rapid advancement of generative AI, Sora's videos represent a significant leap in this technology. The potential for AI video generation appears to be limitless, and the impact of this advancement on various industries, including entertainment and media, is yet to be fully realized.
OpenAI, the company behind ChatGPT, is reportedly venturing into the web search segment. With significant support from Microsoft, OpenAI is believed to be developing a web search product, possibly in collaboration with Bing, to directly compete with Google. This move signals OpenAI’s ambition to redefine search with a more conversational, context-aware approach, leveraging Bing's search capabilities. While the specifics of this tool remain unclear, OpenAI's substantial investment from Microsoft over the years could provide a solid foundation for challenging established search engines. However, whether OpenAI's foray into search will significantly shift user preferences away from traditional options like Google remains to be seen.
OpenAI and Microsoft have collaborated to disrupt state-backed hackers using ChatGPT, a popular AI chatbot created by OpenAI. The company announced that it had terminated accounts associated with five different state-linked groups from China, Iran, North Korea, and Russia, which were using AI services to conduct research, translate content, find coding errors, and run basic coding tasks in support of their malicious activities. The companies highlighted the unique risks that state-affiliated groups pose to the digital ecosystem and human welfare, and emphasized their commitment to building AI tools for the betterment of society while combating malicious activities. This development comes amid growing concerns about the use of generative AI applications for cyber attacks and phishing campaigns, with the UK's National Cyber Security Centre warning about the use of AI tools by amateur and low-skilled hackers for nefarious purposes.