

SecurityWeek
Critical Vulnerability Found in Ray AI Framework
A critical issue in open source AI framework Ray could provide attackers with operating system access to all nodes.
The Record
British and U.S. cybersecurity authorities published guidance on Monday about how to develop artificial intelligence systems in a way that will minimize the risks they face from mischief-makers through to state-sponsored hackers.
Security Affairs
A new round of the weekly SecurityAffairs newsletter arrived! Every week the best security articles from Security Affairs are free for you.
CyberNews
X owner Elon Musk has had a change of heart about the platform’s new headline policy after his own recent post didn’t make sense.
CyberNews
Ahead of OpenAI CEO Sam Altman’s firing, staff researchers sent the Board a letter warning of a powerful artificial intelligence discovery that could threaten humanity.
CyberNews
Elon Musk is sent an anonymous letter dissing OpenAI’s Sam Altman - allegedly written by former OpenAI employees - and released hours before Altman is reinstated as CEO.
SecurityWeek
OpenAI reached an agreement for Sam Altman to return to OpenAI as CEO with a new initial board of directors, after he was fired a week prior.
The Hacker News
AI Solutions Are the New Shadow IT - Ambitious Employees Tout New AI Tools, Ignore Serious SaaS Security Risks
CyberNews
Free ChatGPT users can now chat with OpenAI’s chatbot in voice messages.
CyberNews
Sam Altman to return as CEO of OpenAI.
CyberNews
Sam Altman and OpenAI's board have opened up discussions to bring back the former CEO and founder of the AI startup, while investors seek legal action.
Ars Technica
OpenAI's future hangs in the balance as staff says they'll join former CEO at Microsoft.
CyberNews
The recruitment drive has begun. Marc Benioff, CEO of software company Salesforce, has said that he will match the salary of any researcher who resigns from OpenAI.
Ars Technica
Ilya Sutskever announces regret; 505 OpenAI employees sign letter asking board to resign.
CyberNews
After OpenAI's board confirmed that Sam Altman would not return as the firm’s CEO, most of its employees said they would resign en masse if the decision wasn't reversed.
CyberNews
Anonymous Sudan attacks on OpenAI and Cloudflare are meant to show the group's capabilities.
SecurityWeek
Microsoft hired ex-OpenAI chief Sam Altman and another architect of OpenAI for a new venture after their sudden departures.
CyberNews
Sam Altman, the ousted CEO of ChatGPT creator OpenAI, will definitely not return to the company he co-founded. It’s time to ask what happened.
CyberNews
OpenAI has appointed ex-Twitch boss Emmett Shear to lead the startup, replacing Sam Altman who will join the company's top backer Microsoft to lead a new advanced AI research team, the CEO of the software giant said.
Ars Technica
Cleared of malfeasance, Altman's unpopular firing may be undone—if he's interested.
Ars Technica
Microsoft CEO Nadella "furious"; OpenAI President and three senior researchers resign.
SecurityWeek
OpenAI fired CEO Sam Altman; Mira Murati, OpenAI's chief technology officer, will take over as interim CEO effective immediately.
Ars Technica
After Altman firing, Microsoft has "utmost confidence" in partner OpenAI.
Security Affairs
OpenAI fired its CEO Sam Altman, and Chief Technology Officer Mira Murati was appointed interim CEO to lead the company.
CyberNews
OpenAI has announced that its CEO Sam Altman is leaving the company after board members determined he was no longer fit for the role.
Ars Technica
Cambridge: "When an artificial intelligence hallucinates, it produces false information."
SecurityWeek
Bug hunters uncover over a dozen exploitable vulnerabilities in tools used to build chatbots and other types of AI/ML models.
Ars Technica
Designer: "I think I need to go lie down."
Ars Technica
"We observe the sophisticated Homo sapiens engaging in the ritual of hydration."
Bleeping Computer
DDoS attacks are increasingly taking down even the largest tech companies. Learn more from Specops Software about these types of attacks and how you can protect your devices from being recruited into botnets.
Ars Technica
AI image synthesis is getting more capable at executing ideas, and it's not slowing down.
Ars Technica
Amid GPU shortages, Microsoft reaches for custom silicon to run its AI language models.
Ars Technica
Microsoft: "Soon there will be a Copilot for everyone and for everything you do."
SecurityWeek
The rise of AI-powered disinformation presents an immense challenge to society’s ability to discern fact from fiction.
SecurityWeek
Google files a lawsuit against cybercriminals who delivered account-hijacking malware by offering fake Bard AI downloads.
Trend Micro
This blog entry explores the effectiveness of ChatGPT's safety measures, the potential for AI technologies to be misused by criminal actors, and the limitations of current AI models.
Ars Technica
The H200 will likely power the next generation of AI chatbots and art generators.
CyberNews
The AI company has announced a search for partnerships with organizations to produce public and private datasets for training AI models.
Cyber Security News
Threat actors can use ChatGPT to generate convincing phishing emails or deceptive content that encourages users to download malware.
Infosecurity News
OpenAI has admitted DDoS attacks are the cause of intermittent ChatGPT outages since November 8
Bleeping Computer
During the last 24 hours, OpenAI has been addressing what it describes as "periodic outages" linked to DDoS attacks affecting its API and ChatGPT services.
The Record
A little-noticed provision of the Biden administration’s recently issued executive order on artificial intelligence could lead to important reforms of the federal government’s data collection practices, experts say.
Infosecurity News
This integration reduces reliance on OpenAI’s API while streamlining the tool’s functionality
Bleeping Computer
OpenAI's AI-powered ChatGPT large language model-based chatbot is down because of a major ongoing outage that also took down the company's Application Programming Interface (API).
Bleeping Computer
During its inaugural developer conference, OpenAI unveiled GPTs, short for Generative Pre-trained Transformers. These custom versions of ChatGPT are designed to be shaped by and for individual users, whether for recreational or professional use, and can be shared with others.
The Hacker News
Get the full story on the dangers of the rapidly growing consumer application, ChatGPT, and learn how to resist cyber crime.
Ars Technica
Novel-sized context window, DALL-E 3 API, and more announced at OpenAI DevDay 2023.
Ars Technica
Users can build and share custom-defined roles—from math mentor to sticker designer.
Infosecurity News
The UK Frontier AI Taskforce is evolving to become the UK AI Safety Institute
Infosecurity News
Analyst warns that risks of using the technology will become apparent
Computerworld
After little more than a year on the job, Cisco CIO Fletcher Previn can already see that AI will create productivity and efficiency gains well worth the money spent on developing domain-specific models to address internal and external business plans.
Ars Technica
"Bletchley Declaration" sums up first day of UK's international AI Safety Summit.
SecurityWeek
The AI Safety Summit focused on cutting-edge “frontier” AI that some scientists warn could pose a risk to humanity’s very existence.
Infosecurity News
The 28 signatories of the Bletchley Declaration agreed on an international network of scientific research on ‘frontier AI’ safety
SecurityWeek
Many people are raising the alarm about AI’s as-yet-unknown dangers and calling for safeguards to protect people from its existential threats.
Ars Technica
Order details US admin's approach to AI safety, media authenticity, job loss, and more.
SecurityWeek
Joe Biden's executive order on artificial intelligence (AI) will require industry to develop safety and security standards, add consumer protections and give federal agencies an extensive to-do list.
Computerworld
New tools that can corrupt digitized artwork and other copyrighted materials are emerging to thwart generative AI models that scrape the internet to learn and provide content.
Ars Technica
Long mobile conversations with the AI assistant using AirPods echo the sci-fi film.
Infosecurity News
Experts highlighted the ways generative AI tools can help security teams, and how to mitigate the risks they pose
The Hacker News
Google is expanding its Vulnerability Rewards Program (VRP) to reward researchers for discovering attack scenarios targeting generative artificial intelligence.
SecurityWeek
Google announces a bug bounty program and other initiatives for increasing the safety and security of artificial intelligence (AI)
Ars Technica
Altered images could destroy AI model training efforts that scrape art without consent.
Computerworld
President Biden is expected to announce new rules requiring government agencies to more fully assess AI tools to ensure they're safe and don't expose sensitive information. The government is also expected to loosen immigration policies for tech-savvy workers.
Ars Technica
Researchers say "most transparent" AI model scores only 54% on their index.
Ars Technica
Politeness and emphasis play a surprising role in AI-model communications.
SecurityWeek
The Philippine defense chief ordered military personnel to stop using applications that use AI to create portraits, citing security risks.
SecurityWeek
The British startup is working on software to mitigate against the ‘wild west’ of unregulated AI apps harvesting company data at scale.
CSO
The AI-based risk assessment tool is the latest in a new wave of AI products sweeping into the security market.
Ars Technica
Is AI going to replace us all, or is it just humanity's newest tool?
Ars Technica
This troubling ability could be used by scammers or to target ads.
DarkReading
Once ethics guardrails are breached, generative AI and LLMs could become nearly unlimited in their capacity to enable evil acts, researchers warn.
CyberNews
AI and Bitcoin are a potentially perfect match as both technologies continue to scale
Ars Technica
Firefly 2 improves detail, Firefly Vector generates scalable vectors from a prompt.
Cyber Security News
In cybersecurity's evolution, generative AI models like ChatGPT, FraudGPT, and WormGPT bring innovation and new challenges.
Ars Technica
At an estimated 4 cents per ChatGPT query, OpenAI looks for cheaper AI chip solutions.
Cyber Security News
The maker of ChatGPT, OpenAI, is looking at making its own artificial intelligence chips, which are necessary for operating the highly popular chatbot.
Ars Technica
Broken guardrails for AI systems lead to push for new safety measures.
Ars Technica
Hanks and other celebrities have recently become targets of AI-powered ad scams.
Bleeping Computer
A set of critical vulnerabilities dubbed 'ShellTorch' in the open-source TorchServe AI model-serving tool impact tens of thousands of internet-exposed servers, some of which belong to large organizations.
Ars Technica
Adding fake watermarks to real images, or evading current watermarking methods, is not hard.
The Record
Researchers with Israeli firm Oligo published information about three critical issues with TorchServe, a part of the PyTorch project overseen by Amazon and Meta. The code helps companies build AI models into their businesses.
Ars Technica
"I'm sure it's a special love code that only you and your grandma know."
Ars Technica
WhatsApp, Instagram add animated AI chat avatars, including Snoop Dogg as dungeon master.
Ars Technica
Despite total lack of specifics, rumored collaboration has everyone guessing.
Ars Technica
Feature hopes to remove language barriers, but will speakers know if translations are faulty?
Ars Technica
Incorrect AI-generated answers are forming a feedback loop of misinformation online.
CyberSecurity Dive
The cloud giant is taking a full-stack approach to generative AI, which doubles down on security and reliable results.
Ars Technica
Image recognition and voice features aim to make the AI bot's interface more intuitive.
Ars Technica
Getty will indemnify customers against lawsuits and pay artists on "recurring basis."
Computerworld
Having a plan in place before deploying genAI for software product development and employee assistance tools is critical, says Navan CSO Prabhath Karanth. Otherwise, the threat potential is high.
Computerworld
ServiceNow's new chatbot works across applications and can summarize customer service interactions and perform case, incident, and agent chat summarizations; act as a virtual agent; and perform search functions.
Ars Technica
With better response to details and text, DALL-E 3 hopes to make prompt engineering obsolete.
CyberSecurity Dive
Reports from Gartner and Rackspace show a broad enterprise appetite to weave AI into the tool stack, especially across application security.
Cyber Security News
An interactive online malware analysis sandbox ANY.RUN has recently introduced a new ChatGPT AI-driven detection approach.
Ars Technica
Google admits that Bard isn't always accurate; ropes in Gmail through new Extensions.
SecurityWeek
Texas startup attracts major investor interest to build an MLMDR (machine learning detection and response) technology.
SecurityWeek
Venafi launched a proprietary generative AI model to help with the mammoth, complex, and expanding problem of managing machine identities.
SecurityWeek
Exposed data includes backups of employees' workstations, secrets, private keys, passwords, and over 30,000 internal Microsoft Teams messages.
SecurityWeek
The US Department of Energy gives $39 million in funding for nine projects to advance the cybersecurity of distributed energy resources.
SecurityWeek
Cyber AI Summit will explore cybersecurity use-cases for artificial intelligence (AI) technology and the race to protect LLM algorithms from adversarial use.
Infosecurity News
SlashNext research shows that most of these tools connect to jailbroken versions of public chatbots
Ars Technica
"Some customers are concerned about the risk of IP infringement claims," says Microsoft.
Ars Technica
For $20 a month, Claude fans can get 5x higher usage limits, early access to new features.
Ars Technica
No detectors "reliably distinguish between AI-generated and human-generated content."
Ars Technica
In-person event will have livestreamed keynote, show company's "latest work."
Computerworld
In addition to controlling the use of ChatGPT and other standalone tools, companies now have to grapple with generative AI being built into the productivity apps their employees use every day.
The Hacker News
Cybercriminals are exploiting social media ads on Meta-owned Facebook for malware distribution. With fraudulent ads, they're targeting businesses and
Cyber Security News
The latest attack techniques, significant weaknesses, and exploits have all been highlighted. We also provide the latest software upgrades available to keep your devices secure.
Ars Technica
Should AI-created works be copyrighted? US regulators want to know what you think.
Ars Technica
AI makes it cheap and easy to create propaganda at scale.
The Hacker News
ChatGPT and similar AI models are empowering cybercriminals to launch damaging attacks on online businesses. Learn how they're leveraging these tools
Infosecurity News
OpenAI has launched ChatGPT Enterprise highlighting high-profile customers including Klarna, PwC and The Estee Lauder Companies
Cyber Security News
ChatGPT has released a new enterprise version that is claimed to be SOC 2 compliant, with enterprise-grade security and higher-speed GPT-4 access.
SecurityWeek
ChatGPT Enterprise promises “enterprise-grade security” and a commitment not to use prompts and company data to train AI models.
Cyber Security News
Cybersecurity analysts at Trend Micro, Europol, and UNICRI jointly studied criminal AI exploitation, releasing the "Malicious Uses and Abuses of Artificial Intelligence" report.
Ars Technica
Unlimited GPT-4, encryption, 32K context, and more. Will it become an essential tool?
Ars Technica
New weights-available coding model is free for research and commercial use.
The Record
Social media companies and other businesses have an obligation to protect users’ publicly available information from data scrapers that gather it for unintended purposes, an international group of privacy regulators said Thursday.
Ars Technica
Developers can now bring their own data to customize GPT-3.5 Turbo outputs.
Ars Technica
Meta aims for a universal translator like "Babel Fish" from Hitchhiker’s Guide.
Ars Technica
AI can be very easily harnessed to produce and disseminate misinformation.
Computerworld
Adoption of generative AI is happening at a breakneck pace, but potential threats posed by the technology will require organizations to set up guardrails to protect sensitive data and customer privacy — and to avoid running afoul of regulators.
SecurityWeek
Israel and US have announced plans to invest close to $4 million in projects to improve the security of critical infrastructure systems.
Infosecurity News
Experts welcome efforts to safeguard society from emerging technologies
Ars Technica
AI-penned Microsoft Travel article recommends food bank as a must-see destination.
SecurityWeek
Google sprinkles the magic of generative AI into its open source fuzz testing infrastructure and finds immediate success with code coverage.
Ars Technica
Official: "It is simply not feasible to read every book" for depictions of sex.
Ars Technica
"I swear I thought that was my wall."
The Record
The chairman of the Senate Select Committee on Intelligence renewed calls on Wednesday for the world’s leading artificial intelligence companies to prioritize safety and security in their products, saying that voluntary commitments recently agreed to by a range of companies fall short in reducing risks.
Ars Technica
The paper of record pokes holes in the absorb-everything AI business model.
Cyber Security News
Join us at Cyber Writes for our weekly Threat and Vulnerability Roundup, where we provide the latest updates on cybersecurity news. Keep yourself informed and stay ahead of potential threats with our comprehensive coverage.
Ars Technica
Restrictions don't apply to current OpenAI models, but will affect future versions.
DarkReading
Both threats to enterprises and career opportunities are being created by the escalation of generative AI and ChatGPT, warns Maria 'Azeria' Markstedter.
Ars Technica
Vague ToS previously implied that customer data could be used for AI training.
Infosecurity News
Enterprise usages of generative AI are what is going to turn the threat model of many organizations upside down, Maria Markstedter argued during her speech at Black Hat USA
Ars Technica
New Zealand grocery chain bot suggests harmful things when given silly ingredients.
Infosecurity News
The AI Cyber Challenge is sponsored by DARPA, Google, Microsoft, OpenAI, Anthropic and the Open Source Security Foundation
Security Affairs
The White House this week launched an Artificial Intelligence Cyber Challenge competition for creating a new generation of AI systems. The two-year competition, introduced on Wednesday, aims to foster the development of innovative AI systems that can protect critical applications from cyber threats. […]
DarkReading
A challenge will be offered to teams to build tools using AI in order to solve open source's vulnerability challenges.
SecurityWeek
The White House launched a competition for creating new artificial intelligence systems that can defend critical software from hackers.
Cyber Security News
Azure announced the global expansion of Azure OpenAI Service, including GPT-4 and GPT-3.5 Turbo, to its customers across the world.
CyberSecurity Dive
In partnership with OpenAI, Anthropic, Google and Microsoft, participants will have access to top AI companies’ technology for designing new cybersecurity solutions.
Ars Technica
Prepare for more cover songs like Johnny Cash singing Barbie Girl.
Ars Technica
Papal communiqué warns of AI produced "at the expense of the most fragile and excluded."
SecurityWeek
Microsoft has shared guidance and resources from its AI Red Team program to help organizations and individuals with AI security.
Ars Technica
Meta's suite of three AI models can create sound effects and music from descriptions.
CyberScoop
The deputy national security adviser for cyber and emerging technologies discusses how to mitigate AI's disinformation threat.
Ars Technica
Adversarial attack involves using text strings and may be unstoppable.
Infosecurity News
In an open letter, Senator Ron Wyden urged federal agencies to investigate Microsoft following a Chinese campaign that compromised US government emails
Infosecurity News
Four generative AI pioneers launched the Frontier Model Forum, which will focus on ‘safe and responsible’ creation of new AI models
Ars Technica
Research shows that any AI writing detector can be defeated—and false positives abound.
Infosecurity News
The tool can craft phishing emails, create undetectable malware and identify vulnerable sites
Ars Technica
OpenAI brings the popular AI language model to an official Android client app.
Ars Technica
Skeptics say Anthropic, Google, Microsoft and OpenAI hope to avoid regulation.
The Hacker News
FraudGPT, the latest cybercrime AI tool, is being sold on dark web marketplaces and Telegram channels.
The Hacker News
A new malware family called Realst, written in the Rust programming language, is targeting Apple macOS systems, including macOS 14 Sonoma.
DarkReading
Researchers find artificial intelligence applications that use large language models could be compromised by attackers using natural language to dupe users.
DarkReading
The subscription-based, generative AI-driven offering joins a growing trend toward "generative AI jailbreaking" to create ChatGPT copycat tools for cyberattacks.
Bleeping Computer
The analysis of nearly 20 million information-stealing malware logs sold on the dark web and Telegram channels revealed that they had achieved significant infiltration into business environments.
Ars Technica
Beta feature allows ChatGPT to remember key details with less prompt repetition.
Infosecurity News
Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI all joined the initiative
Bleeping Computer
Cybercriminals are already utilizing and creating malicious tools based on open source AI language models for phishing and malware development. Learn more from Flare about how threat actors are beginning to use AI.
SecurityWeek
The White House said OpenAI and others in the AI race have committed to making technology safer with features such as watermarks on fabricated images.
CyberSecurity Dive
OpenAI, Microsoft and Google are among the companies committing to robust testing and investments in cybersecurity safeguards to defend AI models prior to release.
Ars Technica
"Genesis" will seek to assist journalists, not replace them—yet.
Infosecurity News
Study also finds LLMs are poor at detecting malicious code
Ars Technica
Either way, experts think OpenAI should be less opaque about its AI model architecture.
CSO
Open-source packages with large language model (LLM) capabilities have many dependencies that make calls to security-sensitive APIs, according to a new Endor Labs report.
Bleeping Computer
Threat actors are showing an increased interest in generative artificial intelligence tools, with hundreds of thousands of OpenAI credentials for sale on the dark web and access to a malicious alternative for ChatGPT.
SecurityWeek
Security awareness training doesn’t protect all industries and all people all the time, and social engineering is getting better.
Ars Technica
GPT-4's image capabilities can recognize certain individuals, according to NYT.
Ars Technica
A family of pretrained and fine-tuned language models in sizes from 7 to 70 billion parameters.
CSO
WormGPT presents itself as a black-hat alternative to GPT models, designed specifically for malicious activities, according to SlashNext.
Cyber Security News
The PentestGPT tool is based on ChatGPT and helps penetration testers perform several complicated security tests.
The Hacker News
A new generative AI cybercrime tool called WormGPT is making waves in underground forums. It empowers cybercriminals to automate phishing attacks.
Ars Technica
FTC sends 20-page info request over fears of "false, misleading, or disparaging" generations.
Ars Technica
xAI will feature veterans from DeepMind, Google, Microsoft, and Tesla.
Ars Technica
Dropping waitlist, devs can build the GPT-4 language model into their apps.
Ars Technica
AI models allegedly trained on books copied from popular pirate e-book sites.
Computerworld
The makers of ChatGPT have announced the company will be dedicating 20% of its compute processing power over the next four years to stop superintelligent AI from “going rogue."
Cyber Security News
ChatGPT to ThreatGPT: generative AI's impact on privacy. The evolution of organizational cybersecurity offers both power and threat.
Trend Micro
Since its initial release in late 2022, the AI-powered text generation tool known as ChatGPT has been experiencing rapid adoption rates from both organizations and individual users. However, its latest feature, known as Shared Links, comes with the potential risk of unintentional disclosure of confidential information.
CyberSecurity Dive
The collaboration will integrate Rubrik Security Cloud with Microsoft Sentinel and Azure OpenAI Service.
The Hacker News
Generative AI poses major security risks to enterprises. Threat actors can exploit it to hack weak SaaS authentication protocols, jeopardizing sensitive data.
Latest Hacking News
In the span of a year leading up to May 2023, over 100,000 stolen ChatGPT account credentials were found on various dark web marketplaces. This alarming trend was discovered by researchers at Group-IB.
The Hacker News
Over 100,000 OpenAI ChatGPT account credentials have been compromised and sold on the dark web. Cybercriminals are targeting the valuable information.
Ars Technica
Nonbinding EU draft AI law gets tougher, but it's still open to negotiation.
Infosecurity News
The European Parliament adopted the latest draft of the legislation with an overwhelming majority
Ars Technica
API updates include 4x larger conversation memory for GPT-3.5 and function calling.
Infosecurity News
Pink Drainer group has targeted hundreds of victims so far
Bleeping Computer
A hacking group tracked as 'Pink Drainer' is impersonating journalists in phishing attacks to compromise Discord and Twitter accounts for cryptocurrency-stealing attacks.
Ars Technica
Announcement does not include "a single voice from civil society or academia," says critic.
Infosecurity News
The capabilities will expedite content generation and enhance decision-making processes
ZDNet
AI Verify Foundation will develop test toolkits that mitigate the risks of AI.
Cyber Security News
From handling simple inquiries to instantly generating written works and even developing original software programs, including malware, ChatGPT proves to be an all-encompassing solution. However, this advancement also introduces the potential for a dangerous new cyber threat. Traditional security solutions such as EDRs harness multi-layered data intelligence systems to combat the highly sophisticated threats prevalent […]
Infosecurity News
Vulcan Cyber's Voyager18 research team called the technique
Computerworld
As generative AI revolutionizes tech, governments around the world are trying to come up with regulations that encourage its benefits while minimizing risks such as bias and disinformation.