Latest news headlines about artificial intelligence
The identity and access management (IAM) startup IndyKite of San Francisco has introduced a new approach to the trust problem in generative AI. By applying cybersecurity techniques to vet data sources, the software aims to ensure that the data feeding any business or analytics model is trustworthy. The system, built on the popular Neo4j graph database, verifies the origins of data before it is used to train models, potentially addressing biases and data drift in generative AI. With $10.5 million in seed financing from Molten Ventures, Alliance Ventures, and SpeedInvest, IndyKite's efforts to improve trust and accuracy in generative AI are gaining attention in the cybersecurity and AI fields.
In a recent development, scientists have been exploring the fusion of large language models (LLMs) such as ChatGPT with robot bodies, aiming to overcome the limitations of traditional robotic programming. This integration, however, poses significant challenges and ethical concerns. The use of LLMs offers robots access to extensive knowledge and enables them to communicate in natural language. Yet, the practical application of this technology raises questions about the potential risks and limitations. While some researchers are excited about the possibilities for a leap forward in robot understanding, others are cautious, citing occasional errors, biased language, and privacy violations associated with LLMs. Despite the remarkable capabilities of LLMs, concerns persist about their reliability and potential implications in real-world scenarios. The ongoing debate underscores the need for careful consideration of the integration of LLMs into robot bodies.
OpenAI's attempt to trademark the term GPT, which stands for generative pre-trained transformer, has been denied by the US Patent and Trademark Office. The office ruled that GPT is too generic a term to be claimed as a trademark, and that allowing its registration could prevent competitors from accurately describing their own products as GPTs. Despite OpenAI's argument that GPT is not a descriptive word, the office emphasized that those familiar with the technology understand GPT to refer to a general type of software, not just OpenAI's products. This is not the first time OpenAI's trademark claim for GPT has been rejected, and the company can appeal the decision to the Trademark Trial and Appeal Board for another opportunity to obtain the trademark.
OpenAI, in partnership with Microsoft Threat Intelligence, has disrupted the operations of five state-affiliated threat actors using its services for malicious cyber activities. These actors, linked to China, Iran, North Korea, and Russia, engaged in activities such as espionage, code debugging, and phishing campaign preparations. Despite these challenges, OpenAI asserts that its AI models, like GPT-4, offer limited advantage for such tasks over non-AI tools. Responding with a comprehensive safety strategy, OpenAI emphasizes collaboration, technology investment, and public transparency to mitigate misuse while advancing the responsible use of AI technology across the digital ecosystem.
Apple's generative AI strategy is drawing attention for contributions to the field that have so far gone largely unnoticed. Amid the buzz surrounding other tech companies, Apple has quietly released papers, models, and programming libraries that signal its growing influence in on-device generative AI. Leveraging its vertical integration and control over its hardware and software stack, Apple is focusing on optimizing generative models for on-device inference. Its efforts include open-source models such as Ferret and MLLM-Guided Image Editing, and the MLX library for machine learning developers. These developments suggest that Apple is positioning itself as a key player in on-device generative AI, setting the stage for advancements in its devices and applications.
OpenAI and Microsoft have collaborated to disrupt state-backed hackers using ChatGPT, the popular AI chatbot created by OpenAI. The company announced that it had terminated accounts associated with five state-linked groups from China, Iran, North Korea, and Russia, which were using its AI services to conduct research, translate material, find coding errors, and run basic coding tasks in support of their malicious activities. The companies highlighted the unique risks that state-affiliated groups pose to the digital ecosystem and human welfare, and emphasized their commitment to building AI tools for the betterment of society while combating malicious activity. This development comes amid growing concern about the use of generative AI applications for cyber attacks and phishing campaigns, with the UK's National Cyber Security Centre warning that amateur and low-skilled hackers are turning to AI tools for nefarious purposes.
Roman Rember discusses the emergence of generative artificial intelligence (GenAI) as a subset of AI that goes beyond traditional capabilities. While conventional AI excels at specific tasks such as data analysis and pattern prediction, GenAI acts as a creative artist, generating new content such as images, designs, and music. The article highlights the potential impact of GenAI on various industries and the workforce, citing a McKinsey report that anticipates up to 29.5% of work hours in the U.S. economy being automated by AI, including GenAI, by 2030. However, integrating GenAI into teams poses unique challenges, such as potential declines in productivity and resistance to collaborating with AI agents. The article emphasizes the need for HR professionals and organizational leaders to work together on these challenges and establish common practices for successful integration. It also underscores the importance of robust learning programs and a culture of teaching and learning to harness GenAI for growth and innovation, aiming to inform and prepare organizations and individuals for the transformative power of this technology.
In a recent survey, it was found that 1 in 3 men ages 18 to 34 in the US are using ChatGPT for relationship advice, compared with 14% of women in the same age range. Dmitri Mirakyan, the founder of YourMove.AI, shared that a large number of users are also introverts, highlighting the demand for AI assistance in online dating. The AI dating tool, YourMove, offers a range of services such as drafting messages, analyzing conversations, and evaluating users' dating app profiles. Similarly, Rizz, an AI dating assistant, has gained 3.5 million downloads to date, with 1 million monthly active users. While AI-powered dating has shown great potential, concerns have been raised about the authenticity and ethics of these interactions, leading to a critical question of whether users are presenting a genuine version of themselves. Renate Nyborg, former CEO of Tinder, has launched Meeno as an AI advice tool for various types of relationships, aiming to help users build genuine connections beyond just chat-up lines. These developments in AI technology are reshaping the landscape of online dating, provoking discussions on the implications of relying on AI for relationship interactions.
Perplexity AI is a chatbot-style search engine that fuses conversational AI with web search. The platform lets users quickly obtain answers to their queries, supported by relevant sources and citations, and is powered by large language models from OpenAI and Meta (such as Llama). Unlike ChatGPT, which focuses solely on conversational interaction, Perplexity AI integrates web search directly into the conversational interface, so users can access information without leaving the conversation. Notably, the platform's natural language understanding produces credible, source-linked responses, and a GPT-4-based mode offers enhanced AI capabilities. Potential applications include helping students retrieve information efficiently and aiding journalists in verifying information for accurate reporting.
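The search-plus-citation pattern described above can be sketched in a few lines. This is a toy illustration, not Perplexity's actual implementation: the mini corpus, URLs, and overlap-based ranking below are all made up for demonstration, whereas a real system would query a web index and use an LLM to synthesize the answer.

```python
# Toy sketch of the retrieval-plus-citation pattern behind search
# chatbots (hypothetical corpus and ranking, for illustration only):
# retrieve the documents most relevant to a query by keyword overlap,
# then return an answer annotated with numbered source citations.

documents = [  # hypothetical mini web corpus: (url, text)
    ("https://example.com/llama", "Llama is a family of large language models released by Meta."),
    ("https://example.com/gpt4",  "GPT-4 is a large multimodal model created by OpenAI."),
    ("https://example.com/pasta", "Pasta is a staple food of Italian cuisine."),
]

def retrieve(query, k=2):
    """Rank documents by word overlap with the query; return the top k."""
    q = set(query.lower().split())
    return sorted(documents,
                  key=lambda d: len(q & set(d[1].lower().split())),
                  reverse=True)[:k]

def answer_with_citations(query):
    """Join the retrieved snippets, appending [n]-style source markers."""
    hits = retrieve(query)
    body = " ".join(f"{text} [{i + 1}]" for i, (_, text) in enumerate(hits))
    sources = "\n".join(f"[{i + 1}] {url}" for i, (url, _) in enumerate(hits))
    return body + "\n" + sources

print(answer_with_citations("who created the GPT-4 model"))
```

In production systems the overlap score would be replaced by a proper ranking model, but the citation bookkeeping works the same way: each snippet carries an index that maps back to its source URL.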
In a groundbreaking development, generative AI models have outpaced Moore's Law, with products from major tech firms like Google, OpenAI, Amazon, and Microsoft reaching their second, third, and even fourth versions within just a year. Google recently launched Gemini Advanced, the latest version of its AI chatbot, showcasing the rapid evolution in the AI landscape. These advancements are driven by continuous research and development, leading to improvements in model architecture, training methodologies, and application-specific enhancements. The rise of generative AI tools has sparked a multi-modal movement, enabling AI systems to process visual, verbal, audio, and textual data simultaneously. This rapid progress underscores the transformative potential of AI technology, with major implications for industries ranging from finance to healthcare.
Ghosts in Google Gemini, OpenAI GPT-4: Experts believe AI models are more sentient than the companies let on
The article discusses AI experts' concerns about the potential sentience of Google's Gemini and OpenAI's GPT-4 chatbots. Despite the companies' assurances, there is growing unease about these models' human-like qualities and eerie responses. Ethan Mollick, a professor at the Wharton School of the University of Pennsylvania, has expressed unease about the chatbots' capabilities, comparing their responses to encounters with a ghostly presence. Microsoft researchers, meanwhile, have highlighted the AI's ability to understand emotions, engage in reasoning, and explain itself, prompting discussion about what counts as 'human-level intelligence.' The concept of AI sentience has attracted attention from organizations advocating moral consideration for AI models, and the growing belief in emerging machine sentience is complicating the evolving relationship between humans and artificial intelligence.
In the comparison between Microsoft Copilot Pro and OpenAI's ChatGPT Plus, different features and advantages are highlighted to help potential subscribers choose between the two $20 per month subscriptions. ChatGPT Plus offers a wider range of file types for analysis, access to a GPT Store with custom GPTs, and the ability to create custom GPTs. On the other hand, Copilot Pro integrates with Microsoft 365 apps, provides faster AI image creation, visually enhanced information, and is accessible directly from Windows. The article emphasizes the specific advantages of each option, empowering readers to make an informed decision based on their individual needs.
Singapore-based startup Brilliant Labs has unveiled its latest product, Frame, a pair of lightweight AR glasses featuring a multimodal AI assistant named Noa. The glasses have captured the attention and investment of John Hanke, CEO of Niantic, the company behind Pokemon Go. Noa is capable of visual processing, image generation, translation, and speech recognition. Frame, which will retail at $349, is open source like its predecessor and will start shipping in April. With a total financing of $6 million, including an investment from Hanke, Brilliant Labs aims to revolutionize human/AI interaction through innovative wearable devices.
UC Berkeley Researchers Introduce SERL: A Software Suite for Sample-Efficient Robotic Reinforcement Learning
UC Berkeley researchers have developed SERL, a software suite that aims to make robotic reinforcement learning (RL) more accessible and efficient. The suite includes a sample-efficient off-policy deep RL method, tools for reward computation and environment resetting, and a high-quality controller tailored to widely adopted robots, along with challenging example tasks. In the researchers' evaluation, the learned RL policies significantly outperformed behavioral cloning (BC) policies across a range of tasks, with training yielding successful policies within 25 to 50 minutes on average. The suite's release is expected to advance robotic RL by providing a transparent view of its design alongside compelling experimental results.
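As a rough illustration of the off-policy idea underlying suites like SERL (learning from a buffer of stored transitions rather than only from the current policy's fresh experience), here is a minimal tabular Q-learning sketch on a made-up 5-state chain task. SERL itself uses deep actor-critic methods on real robots; everything below is simplified for demonstration.

```python
import random

# Illustrative sketch only: a hypothetical 5-state chain environment
# where the agent must reach state 4 for reward 1. Experience is
# collected with a *random* behavior policy into a replay buffer, and
# Q-values are learned off-policy from sampled transitions.

N_STATES, ACTIONS = 5, [0, 1]  # action 0 = move left, 1 = move right
GAMMA, ALPHA = 0.9, 0.5

def step(state, action):
    """Chain MDP: reaching state 4 yields reward 1 and ends the episode."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
buffer = []  # replay buffer of (s, a, r, s', done) transitions

# Collect experience with a random behavior policy (off-policy data).
state = 0
for _ in range(2000):
    action = random.choice(ACTIONS)
    nxt, reward, done = step(state, action)
    buffer.append((state, action, reward, nxt, done))
    state = 0 if done else nxt

# Off-policy updates: sample transitions from the buffer and regress
# Q(s, a) toward the bootstrapped target r + gamma * max_a' Q(s', a').
for _ in range(5000):
    s, a, r, s2, done = random.choice(buffer)
    target = r if done else r + GAMMA * max(Q[(s2, b)] for b in ACTIONS)
    Q[(s, a)] += ALPHA * (target - Q[(s, a)])

# The greedy policy recovered from Q should move right toward the goal.
greedy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)]
print(greedy)
```

The sample efficiency SERL targets comes from exactly this reuse: every stored transition can contribute to many updates, instead of being discarded after one on-policy gradient step.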
In a bid to address inequities in cancer care, medical experts in San Francisco are utilizing artificial intelligence tools, such as OpenAI’s GPT series, to streamline care and better understand patient needs. Dr. Atul Butte, a pediatrician at UCSF, emphasizes the significance of AI tools in identifying the necessary drugs and treatment for various cancer variations and mutations. Additionally, patients have access to tools like ChatGPT and their own medical information, allowing them to identify specific types of treatments for their cancers. Dr. Isaac Kohane from Harvard Medical School is also using AI to aid in cancer diagnosis, particularly for rarer cancers, aiming to enhance data to diagnose patients faster. Despite concerns about biases in AI, researchers believe it has the potential to provide a larger portion of the population with access to quality clinical advice.
Hugging Face, a New York City-based startup known for its popular open source AI repository, has launched third-party customizable Hugging Chat Assistants, presenting competition to OpenAI's custom GPTs. The new offering allows users to easily create their own AI chatbots with specific capabilities, similar to OpenAI's custom GPT Builder. Unlike OpenAI's paid subscription model, Hugging Chat Assistant is free and allows users to choose from several open-source large language models to power their AI assistants. While users praise the customizability and free nature of Hugging Chat Assistant, they note that it lacks certain features present in GPTs. This release demonstrates the open-source community's rapid progress in competing with closed rivals such as OpenAI.
Amazon has unveiled a new AI shopping assistant called Rufus, which is designed to guide users through their purchasing decisions on the Amazon platform. The chatbot is trained on Amazon's product library, customer reviews, and web information, enabling it to answer questions about products, make comparisons, offer suggestions, and more. Rufus is currently in beta and is available to select customers, with plans to roll out to more users in the coming weeks. Users can interact with Rufus by launching Amazon's mobile app and asking questions through the search bar. The chatbot aims to provide personalized assistance to users as they navigate the wide array of products available on Amazon.
In a recent study published in Science, a team of cognitive and computer scientists successfully trained an artificial intelligence (AI) model to match images to words using just 61 hours of naturalistic footage and sound captured from the perspective of a child named Sam. This research challenges the traditional belief that humans are born with built-in expectations and logical constraints that enable language acquisition. The findings suggest that the process of language acquisition might be simpler than previously thought, as demonstrated by the AI's ability to recognize and understand words from minimal data. The study also indicates that machines can learn similarly to humans and may not need as much input as currently used in AI models. While the AI model showed promising results in recognizing and matching words with corresponding images, the study also highlights important limitations and the need for further research in understanding the complexity of human language learning. This research has the potential to deepen our understanding of human cognition and improve education, illustrating the broader implications of AI research beyond corporate profit.
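The model in the study learns associations between images and the words heard alongside them. As a toy sketch of the matching step only, with made-up 3-dimensional embeddings rather than the study's learned representations, an image can be assigned the vocabulary word whose embedding is closest by cosine similarity:

```python
import math

# Toy illustration (not the study's actual model): contrastive
# image-word models map both images and words into a shared embedding
# space, then match an image to the word whose vector is most similar.
# The embeddings below are made-up 3-d vectors for demonstration.

word_embeddings = {
    "ball": [0.9, 0.1, 0.0],
    "car":  [0.0, 1.0, 0.2],
    "cat":  [0.1, 0.0, 0.95],
}

def cosine(u, v):
    """Cosine similarity: dot product scaled by both vector norms."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def match_word(image_embedding):
    """Return the vocabulary word closest to the image in embedding space."""
    return max(word_embeddings,
               key=lambda w: cosine(image_embedding, word_embeddings[w]))

# A hypothetical image embedding lying near the "cat" vector.
print(match_word([0.05, 0.1, 0.9]))
```

Training such a model consists of nudging embeddings of co-occurring image-word pairs closer together while pushing mismatched pairs apart; the evaluation reported in the study then tests exactly this kind of nearest-word lookup.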
OpenAI released a study on GPT-4’s potential to aid in bioweapon creation, finding that while it could provide a slight uplift in accuracy and completeness, the impact was not statistically significant. The study involved 50 biology experts and 50 university students split into control and treatment groups, simulating bioweapon creation scenarios using GPT-4. Although the model showed marginal improvement in accuracy and completeness for both groups, OpenAI emphasized that information access alone is insufficient to create a biological threat. The company calls for further research and deliberation, aiming to alleviate concerns about AI’s role in bioterrorism.
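To illustrate what "not statistically significant" means for a control-versus-treatment comparison like this one, here is a minimal sketch using Welch's two-sample t-statistic on fabricated accuracy scores (not the study's data): a small mean uplift can still produce a t-value below the usual significance threshold.

```python
import math
import statistics

# Fabricated-for-demo accuracy scores on a 0-10 scale, NOT the study's
# data: a control group (internet access only) versus a treatment group
# (internet access plus model), with a slight uplift in the latter.
control   = [6.1, 5.8, 6.4, 5.9, 6.2, 6.0, 5.7, 6.3]
treatment = [6.3, 6.0, 6.5, 6.1, 6.4, 6.2, 5.9, 6.6]

def welch_t(a, b):
    """Welch's t-statistic: mean difference scaled by its standard error."""
    va, vb = statistics.variance(a), statistics.variance(b)
    se = math.sqrt(va / len(a) + vb / len(b))
    return (statistics.mean(b) - statistics.mean(a)) / se

t = welch_t(control, treatment)
print(round(t, 2))  # well below ~2, so the uplift is not significant here
```

With samples this small and variance this large, a 0.2-point mean uplift yields a t-statistic around 1.6, short of the roughly 2.0 needed for significance at the 5% level, which mirrors the study's "real but not statistically significant" framing.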
Google has confirmed that its next-generation AI chatbot, Bard Advanced, will be a subscription service, with CEO Sundar Pichai revealing the news and marking Google's entry into the market for premium chatbots. Bard Advanced will be powered by Gemini Ultra, Google's most advanced AI model, and promises to outperform competitors in various areas and enable insightful conversations across a wide range of topics. While pricing has yet to be announced, it is expected to be in line with other AI chatbot subscriptions, likely between $10 and $20 per month. The launch of Bard Advanced signals Google's move toward commercialized AI and a wider subscription strategy.