Semua Kabar

Disney Pinnacle unveils Genesis Keys quest with Web3 support from Dapper Labs

Disney PinnaclebyDapper Labs, the digital pin collecting and trading platform, has now announced its Genesis Keys quest.

That means Disney fans worldwide can claim free Genesis Keys every four hours as part of the global quest for one-of-one Genesis Editions — the first digital pins ever minted on the platform.

The Genesis Keys campaign marks the first phase of Disney Pinnacle by Dapper Labs’ rollout gearing up for more action throughout the summer. As the community collectively claims Keys at 100K, 200K, and 300K milestones, Genesis Capsules containing ultra-rare digital pins will be released. Each Genesis Pin is uniquely serialized and features iconic Disney moments.

Dapper Labs is the Web3 company behind NBA Top Shot, NFL All Day, Disney Pinnacle, and the Flow blockchain. Since 2018, Dapper Labs has pioneered the industry in creating blockchain-based consumer experiences, working with major IP holders to bring digital ownership to fans worldwide. The Vancouver-based company has raised over $600 million from leading investors including Andreessen Horowitz, Coatue, and GV.

AMD debuts AMD Instinct MI350 Series accelerator chips with 35X better inferencing

AMD unveiled its comprehensive end-to-end integrated AI platform vision and introduced its open, scalable rack-scale AI infrastructure built on industry standards at its annual Advancing AI event.

The Santa Clara, California-based chip maker announced its new AMD Instinct MI350 Series accelerators, which are four times faster on AI compute and 35 times faster on inferencing than prior chips.

AMD and its partners showcased AMD Instinct-based products and the continued growth of the AMD ROCm ecosystem. It also showed its powerful, new, open rack-scale designs and roadmap that bring leadership Rack Scale AI performance beyond 2027.

“We can now say we are at the inference inflection point, and it will be the driver,” said Lisa Su, CEO of AMD, in a keynote at the Advancing AI event.

In closing, in a jab at Nvidia, she said, “The future of AI will not be built by any one company or within a closed system. It will be shaped by open collaboration across the industry with everyone bringing their best ideas.”

AMD unveiled the Instinct MI350 Series GPUs, setting a new benchmark for performance, efficiency and scalability in generative AI and high-performance computing. The MI350 Series, consisting of both Instinct MI350X and MI355X GPUs and platforms, delivers a four times generation-on-generation AI compute increase and a 35 times generational leap in inferencing, paving the way for transformative AI solutions across industries.

“We are tremendously excited about the work you are doing at AMD,” said Sam Altman, CEO of Open AI, on stage with Lisa Su.

He said he couldn’t believe it when he heard about the specs for MI350 from AMD, and he was grateful that AMD took his company’s feedback.

AMD demonstrated end-to-end, open-standards rack-scale AI infrastructure—already rolling out with AMD Instinct MI350 Series accelerators, 5th Gen AMD Epyc processors and AMD Pensando Pollara network interface cards (NICs) in hyperscaler deployments such as Oracle Cloud Infrastructure (OCI) and set for broad availability in 2H 2025. AMD also previewed its next generation AI rack called Helios.

It will be built on the next-generation AMD Instinct MI400 Series GPUs, the Zen 6-based AMD Epyc Venice CPUs and AMD Pensando Vulcano NICs.

“I think they are targeting a different type of customer than Nvidia,” said Ben Bajarin, analyst at Creative Strategies, in a message to GamesBeat. “Specifically I think they see the neocloud opportunity and a whole host of tier two and tier three clouds and the on-premise enterprise deployments.”

Bajarin added, “We are bullish on the shift to full rack deployment systems and that is where Helios fits in which will align with Rubin timing. But as the market shifts to inference, which we are just at the start with, AMD is well positioned to compete to capture share. I also think, there are lots of customers out there who will value AMD’s TCO where right now Nvidia may be overkill for their workloads. So that is area to watch, which again gets back to who the right customer is for AMD and it might be a very different customer profile than the customer for Nvidia.”

The latest version of the AMD open-source AI software stack, ROCm 7, is engineered to meet the growing demands of generative AI and high-performance computing workloads— while dramatically improving developer experience across the board. (Radeon Open Compute is an open-source software platform that allows for GPU-accelerated computing on AMD GPUs, particularly for high-performance computing and AI workloads). ROCm 7 features improved support for industry-standard frameworks, expanded hardware compatibility, and new development tools, drivers, APIs and libraries to accelerate AI development and deployment.

In her keynote, Su said, “Opennesss should be more than just a buzz word.”

The Instinct MI350 Series exceeded AMD’s five-year goal to improve the energy efficiency of AI training and high-performance computing nodes by 30 times, ultimately delivering a 38 times improvement. AMD also unveiled a new 2030 goal to deliver a 20 times increase in rack-scale energy efficiency from a 2024 base year, enabling a typical AI model that today requires more than 275 racks to be trained in fewer than one fully utilized rack by 2030, using 95% less electricity.

AMD also announced the broad availability of the AMD Developer Cloud for the global developer and open-source communities. Purpose-built for rapid, high-performance AI development, users will have access to a fully managed cloud environment with the tools and flexibility to get started with AI projects – and grow without limits. With ROCm 7 and the AMD Developer Cloud, AMD is lowering barriers and expanding access to next-gen compute. Strategic collaborations with leaders like Hugging Face, OpenAI and Grok are proving the power of co-developed, open solutions. The announcement got some cheers from folks in the audience, as the company said it would give attendees developer credits.

AMD customers discussed how they are using AMD AI solutions to train today’s leading AI models, power inference at scale and accelerate AI exploration and development.

Meta detailed how it has leveraged multiple generations of AMD Instinct and Epyc solutions across its data center infrastructure, with Instinct MI300X broadly deployed for Llama 3 and Llama 4 inference. Meta continues to collaborate closely with AMD on AI roadmaps, including plans to leverage MI350 and MI400 Series GPUs and platforms.

Oracle Cloud Infrastructure is among the first industry leaders to adopt the AMD open rack-scale AI infrastructure with AMD Instinct MI355X GPUs. OCI leverages AMD CPUs and GPUs to deliver balanced, scalable performance for AI clusters, and announced it will offer zettascale AI clusters accelerated by the latest AMD Instinct processors with up to 131,072 MI355X GPUs to enable customers to build, train, and inference AI at scale.

Microsoft announced Instinct MI300X is now powering both proprietary and open-source models in production on Azure.

HUMAIN discussed its landmark agreement with AMD to build open, scalable, resilient and cost-efficient AI infrastructure leveraging the full spectrum of computing platforms only AMD can provide.Cohere shared that its high-performance, scalable Command models are deployed on Instinct MI300X, powering enterprise-grade LLM inference with high throughput, efficiency and data privacy.

In the keynote, Red Hat described how its expanded collaboration with AMD enables production-ready AI environments, with AMD Instinct GPUs on Red Hat OpenShift AI delivering powerful, efficient AI processing across hybrid cloud environments.

“They can get the most out of the hardware they’re using,” said the Red Hat exec on stage.

Astera Labs highlighted how the open UALink ecosystem accelerates innovation and delivers greater value to customers and shared plans to offer a comprehensive portfolio of UALink products to support next-generation AI infrastructure.Marvell joined AMD to share the UALink switch roadmap, the first truly open interconnect, bringing the ultimate flexibility for AI infrastructure.

TensorWave deploys AMD Instinct MI355X GPUs in its cloud platform

TensorWave, a leader in AMD-powered AI infrastructure solutions, today announced the deployment of AMD Instinct MI355X GPUs in its high-performance cloud platform.

As one of the first cloud providers to bring the AMD Instinct MI355X to market, TensorWave enables customers to unlock next-level performance for the most demanding AI workloads—all with unmatched white-glove onboarding and support.

The new AMD Instinct MI355X GPU is built on the 4th Gen AMD CDNA architecture and features 288GB of HBM3E memory and 8TB/s memory bandwidth, optimized for generative AI training, inference, and high-performance computing (HPC).

TensorWave’s early adoption allows its customers to benefit from the MI355X’s compact, scalable design and advanced architecture, delivering high-density compute with advanced cooling infrastructure at scale.

“TensorWave’s deep specialization in AMD technology makes us a highly optimized environment for next-gen AI workloads,” said Piotr Tomasik, president at TensorWave, in a statement. “With the Instinct MI325X now deployed on our cloud and Instinct MI355X coming soon, we’re enabling startups and enterprises alike to achieve up to 25% efficiency gains and 40% cost reductions, results we’ve already seen with customers using our AMD-powered infrastructure.”

TensorWave’s exclusive use of AMD GPUs provides customers with an open, optimized AI software stack powered by AMD ROCm, avoiding vendor lock-in and reducing total cost of ownership. Its focus on scalability, developer-first onboarding, and enterprise-grade SLAs makes it the go-to partner for organizations prioritizing performance and choice.

“AMD Instinct MI350 series GPUs deliver breakthrough performance for the most demanding AI and HPC workloads,” said Travis Karr, corporate vice president of business development, Data Center GPU Business, AMD, in a statement. “The AMD Instinct portfolio, together with our ROCm open software ecosystem, enables customers to develop cutting-edge platforms that power generative AI, AI-driven scientific discovery, and high-performance computing applications.”

TensorWave is also currently building the largest AMD-specific AI training cluster in North America, advancing its mission to democratize access to high-performance compute. By delivering end-to-end support for AMD-based AI workloads, TensorWave empowers customers to seamlessly transition, optimize, and scale within an open and rapidly evolving ecosystem.

For more information please visit:

Cloud collapse: Replit and LlamaIndex knocked offline by Google Cloud identity outage

Join the event trusted by enterprise leaders for nearly two decades. VB Transform brings together the people building real enterprise AI strategy.Learn more

Days afterOpenAIandGoogle Cloudannounced a partnership to support the growing use of generative AI platforms, much of the AI-powered web and tools went down due to an outage of the leading cloud providers.

Google Cloud Service Platform (GCP) and someCloudflareservicesbegan experiencingissuesaround 10:00 a.m. PTtoday, affecting several AI development tools and data storage services, including ChatGPT and Claude, as well as a variety of other AI platforms.

We are aware of a service disruption to some Google Cloud services and we are working hard to get you back up and running ASAP.Please view our status dashboard for the latest updates:https://t.co/sT6UxoRK4R

A GCP spokesperson confirmed the outage to VentureBeat, urging users to check its public status dashboard.

GCP said affected services include API Gateway, Agent Assist, Cloud Data Fusion, Contact Center AI Platform, Google App Engine, Google BigQuery, Google Cloud Storage, Identity Platform, Speech-to-Text, Text-to-Speech and Vertex AI Search, among other tools. Google’s mobile development platform, Firebase, alsowent down.

VentureBeat staffers had trouble accessing Google Meet, but other Google services on Workspace remained online.

A Cloudflare spokesperson told VentureBeat only “a limited number of services at Cloudflare use Google Cloud and were impacted. We expect them to come back shortly. The core Cloudflare services were not impacted.”

Despite media reports and user-provided feedback on Down Detector,AWSstated thatits service remains up, including AI platforms such as Bedrock and Sagemaker.

OpenAI acknowledged some users had issues logging into their platforms but havesince resolved the problem.Anthropicnoted on itsstatus pagethat Claude experienced “elevated error rates on the API, console and Claude AI.”

We are aware of issues affecting multiple external internet providers, impacting the availability of our services such as single sign-on (SSO) and other log-in methods. Our engineering teams are working to mitigate these issues.Thank you for your continued patience.For the…

Developer tools likeLlamaIndex’s LlamaCloud,Weights & Biases,Windsurf,SupabaseandReplitreported issues. Other platforms likeCharacter AIalsoannouncedthey were affected.

Hi folks – LlamaCloud (https://t.co/DHMd6BFO0l) is currently down due to the ongoing global AWS/GCP/Firebase outage.We are closely monitoring the solution and will keep you posted when it's resolved!

? We're aware of the Google Cloud outage affecting various web services, including Weights & Biases products like W&B Models and@weave_wb.Our team is monitoring the situation and will provide updates.Thank you for your patience.

Our upstream cloud providers are currently experiencing a major outage. We are working as best we can to restore Replit services.

In addition to AI tools, other websites and internet services, such as Spotify and Discord, also reportedly went down.

In many ways, the outage highlights the challenges of relying on a single cloud service or database provider and the risks associated with an interconnected Internet. If one of your cloud services goes down, it could impact some users whose log-in or data stream is hosted there.

Google Cloud has been gradually wrestingmarket leadership in enterprise AIfrom its competitors, thanks to the large number of developer and database tools it has begun offering organizations. On the other hand, Cloudflare has beenpartnering with companieslike Hugging Face to deploy AI apps faster.

First reported byReuters, Google and OpenAI have struck a deal that will allow OpenAI to utilize Google Cloud to meet the growing demand on its platform.

But that’s not to say Google or Cloudflare may lose an edge among enterprise AI users who depend on consistent uptime. While the company continues to investigate the cause of the outage, enterprises often have, and should have, redundancies in case their provider goes down. Outages happen, and they happen far too frequently.

The last massive outage happened around the same time last year, in July, when CrowdStrikeaccidentally triggered outagesthat impacted Microsoft Windows users.

In typical fashion, many people saw the outages as an opportunity for comedy, or at least to catch up on tasks they’d been putting off.

much of the AI internet is down nowfirebase is down, cursor is down, lovable is down, supabase is down, google ai is down, cursor is down, aws is down… almost everything is down.finally time to catch up on the 87 tools, 14 models, and 12 AI startup ideas we want to build.

Thank GCP.I couldn’t find a reason to dip out of a couple meetings this afternoon and now I do!

So yes, it seems like the digital universe is giving everyone a forced break today!

If you want to impress your boss, VB Daily has you covered. We give you the inside scoop on what companies are doing with generative AI, from regulatory shifts to practical deployments, so you can share insights for maximum ROI.

Thanks for subscribing. Check out moreVB newsletters here.

Meta’s new world model lets robots manipulate objects in environments they’ve never encountered before

Join the event trusted by enterprise leaders for nearly two decades. VB Transform brings together the people building real enterprise AI strategy.Learn more

While large language models (LLMs) have mastered text (and other modalities to some extent), they lack the physical “common sense” to operate in dynamic, real-world environments. This has limited the deployment of AI in areas like manufacturing and logistics, where understanding cause and effect is critical.

Meta’s latest model,V-JEPA 2, takes a step toward bridging this gap by learning a world model from video and physical interactions.

V-JEPA 2 can help create AI applications that require predicting outcomes and planning actions in unpredictable environments with many edge cases. This approach can provide a clear path toward more capable robots and advanced automation in physical environments.

Humans develop physical intuition early in life by observing their surroundings. If you see a ball thrown, you instinctively know its trajectory and can predict where it will land. V-JEPA 2 learns a similar “world model,” which is an AI system’s internal simulation of how the physical world operates.

model is built on three core capabilities that are essential for enterprise applications: understanding what is happening in a scene, predicting how the scene will change based on an action, and planning a sequence of actions to achieve a specific goal. As Meta states in itsblog, its “long-term vision is that world models will enable AI agents to plan and reason in the physical world.”

The model’s architecture, called the Video Joint Embedding Predictive Architecture (V-JEPA), consists of two key parts. An “encoder” watches a video clip and condenses it into a compact numerical summary, known as anembedding. This embedding captures the essential information about the objects and their relationships in the scene. A second component, the “predictor,” then takes this summary and imagines how the scene will evolve, generating a prediction of what the next summary will look like.

This architecture is the latest evolution of the JEPA framework, which was first applied to images withI-JEPAand now advances to video, demonstrating a consistent approach to building world models.

Unlike generative AI models that try to predict the exact color of every pixel in a future frame — a computationally intensive task — V-JEPA 2 operates in an abstract space. It focuses on predicting the high-level features of a scene, such as an object’s position and trajectory, rather than its texture or background details, making it far more efficient than other larger models at just 1.2 billion parameters

That translates to lower compute costs and makes it more suitable for deployment in real-world settings.

V-JEPA 2 is trained in two stages. First, it builds its foundational understanding of physics throughself-supervised learning, watching over one million hours of unlabeled internet videos. By simply observing how objects move and interact, it develops a general-purpose world model without any human guidance.

In the second stage, this pre-trained model is fine-tuned on a small, specialized dataset. By processing just 62 hours of video showing a robot performing tasks, along with the corresponding control commands, V-JEPA 2 learns to connect specific actions to their physical outcomes. This results in a model that can plan and control actions in the real world.

This two-stage training enables a critical capability for real-world automation: zero-shot robot planning. A robot powered by V-JEPA 2 can be deployed in a new environment and successfully manipulate objects it has never encountered before, without needing to be retrained for that specific setting.

This is a significant advance over previous models that required training data from theexactrobot and environment where they would operate. The model was trained on an open-source dataset and then successfully deployed on different robots in Meta’s labs.

For example, to complete a task like picking up an object, the robot is given a goal image of the desired outcome. It then uses the V-JEPA 2 predictor to internally simulate a range of possible next moves. It scores each imagined action based on how close it gets to the goal, executes the top-rated action, and repeats the process until the task is complete.

Using this method, the model achieved success rates between 65% and 80% on pick-and-place tasks with unfamiliar objects in new settings.

This ability to plan and act in novel situations has direct implications for business operations. In logistics and manufacturing, it allows for more adaptable robots that can handle variations in products and warehouse layouts without extensive reprogramming. This can be especially useful as companies are exploring the deployment ofhumanoid robotsin factories and assembly lines.

The same world model can power highly realistic digital twins, allowing companies to simulate new processes or train other AIs in a physically accurate virtual environment. In industrial settings, a model could monitor video feeds of machinery and, based on its learned understanding of physics, predict safety issues and failures before they happen.

This research is a key step toward what Meta calls “advanced machine intelligence (AMI),” where AI systems can “learn about the world as humans do, plan how to execute unfamiliar tasks, and efficiently adapt to the ever-changing world around us.”

Meta has released the model and its training code and hopes to “build a broad community around this research, driving progress toward our ultimate goal of developing world models that can transform the way AI interacts with the physical world.”

V-JEPA 2 moves robotics closer to the software-defined model that cloud teams already recognize: pre-train once, deploy anywhere. Because the model learns general physics from public video and only needs a few dozen hours of task-specific footage, enterprises can slash the data-collection cycle that typically drags down pilot projects. In practical terms, you can prototype a pick-and-place robot on an affordable desktop arm, then roll the same policy onto an industrial rig on the factory floor without gathering thousands of fresh samples or writing custom motion scripts.

Lower training overhead also reshapes the cost equation. At 1.2 billion parameters, V-JEPA 2 fits comfortably on a single high-end GPU, and its abstract prediction targets reduce inference load further. That lets teams run closed-loop control on-prem or at the edge, avoiding cloud latency and the compliance headaches that come with streaming video outside the plant. Budget that once went to massive compute clusters can fund extra sensors, redundancy, or faster iteration cycles instead.

If you want to impress your boss, VB Daily has you covered. We give you the inside scoop on what companies are doing with generative AI, from regulatory shifts to practical deployments, so you can share insights for maximum ROI.

Thanks for subscribing. Check out moreVB newsletters here.

SAG-AFTRA board approves agreement with game companies on AI and new contract

TheScreen Actors Guild-American Federation of Television and Radio Artists(SAG-AFTRA) National Board approved the tentative agreement with the video game bargaining group.

The contract on terms for the Interactive Media Agreement will now be submitted to the membership for ratification.

The new contract accomplishes important guardrails and gains around AI, including the requirement of informed consent across various AI uses and the ability for performers to suspend informed consent for Digital Replica use during a strike.

If ratified, the agreement would provide compounded increases in performer compensation at a rate of 15.17% upon ratification plus additional 3% increases in November 2025, November 2026 and November 2027. Additionally, the overtime rate maximum for overscale performers will now be based on double scale. The health & retirement contribution rates to the SAG-AFTRA Health Plan will be raised from 16.5% to 17% upon ratification and to 17.5% in Oct. 2026.

Compensation gains include the establishment of collectively-bargained minimums for the use of Digital Replicas created with IMA-covered performances and higher minimums (7.5x scale) for “Real Time Generation,” i.e., embedding a Digital Replica-voiced chatbot in a video game. “Secondary Performance Payments” will also ensure compensation when visual performances are re-used in another videogame.

Essential new safety provisions were also secured, including a requirement for a qualified medical professional to be present or readily available at rehearsals and performances during which hazardous actions or working conditions are planned. Rest periods are now provided for on-camera principal performers and employers can no longer request that performers complete stunts or other dangerous activity in virtual auditions.

The spokesperson for the video game producers party to the Interactive Media Agreement, Audrey Cooling, said earlier this week in a statement, “We are pleased to have reached a tentative contract agreement that reflects the important contributions of SAG-AFTRA-represented performers in video games. This agreement builds on three decades of successful partnership between the interactive entertainment industry and the union.”

Cooling added, “It delivers historic wage increases of over 24% for performers, enhanced health and safety protections, and industry-leading AI provisions requiring transparency, consent and compensation for the use of digital replicas in games. We look forward to continuing to work with performers to create new and engaging entertainment experiences for billions of players throughout the world.”

The full terms of the three-year deal will be released with the ratification materials on Wednesday, June 18.

A tentative agreement was reached with the video game employers on June 9 and the strike was officially suspended on June 11.

Member informational meetings are being scheduled and additional details will be available atsagaftra.org/videogames2025in the coming days.

Eligible SAG-AFTRA members will have until 5 p.m. PDT on Wednesday, July 9, 2025 to cast their vote on ratification.

SAG-AFTRA represents approximately 160,000 actors, announcers, broadcast journalists, dancers, DJs, news writers, news editors, program hosts, puppeteers, recording artists, singers, stunt performers, voiceover artists and other entertainment and media professionals.

Gamefam brings FIFA Club World Cup 2025 to Roblox

Roblox game studioGamefamannounced today it is collaborating with FIFA to bring the FIFA Club World Cup 2025 to its game Super League Soccer. The two are holding a major event within the game leading up to the Club World Cup to raise the tournament’s profile with Roblox’s Gen Z and Alpha-aged audience. All 13 of the participating football clubs are playable in Roblox for the first time, with the event set to kick off on June 14 and running through July 13 alongside the real Club World Cup.

According to Gamefam, the virtual Club World Cup will mimic the real deal, with in-game ads and signage to show FIFA’s sponsors and virtual merch for players like branded items and cosmetics from both FIFA and Adidas. It will also follow the Club World Cup, with the in-game bracket updated as the real matches are completed.

Ricardo Briceno, Gamefam’s Chief Business Officer, told GamesBeat in an interview: “Working with FIFA isn’t just another brand activation for us. It’s uniquely meaningful. At Gamefam, we regularly have the privilege of collaborating with leading global IPs, but FIFA stands apart as the pinnacle of both global football and global sport. The weight of that legacy and the passion behind it make this partnership feel more like a cultural moment than a marketing campaign. This collaboration carries a depth and resonance that’s rare on Roblox, and it reflects our shared goal with FIFA: to redefine how the next generations fall in love with and experience the beautiful game.”

This is the second collaboration between Gamefam and FIFA. The first was in November, where they revealed the virtual Club World Cup Trophy reveal, which received 5.5 million visits and 70 million minutes of engagement in three days — the biggest soccer event in Roblox, according to the developer. 84% of players who attended the event said they planned to follow the tournament after this, and the Club World Cup activation gives them a chance to do so within Roblox.

In addition to the sponsors and the connection to the real-world tournament, the virtual Club World Cup gives players a chance to participate in their own tournament. They can earn points both by winning and by successfully completing challenges.

Briceno added, “With Gen Z & Alpha spending an average of 2.4 hours per day on Roblox, it’s no surprise that Roblox has become a magnet for sports IP: the NBA, NFL, NASCAR, and now FIFA have all recognized its potential to engage the next wave of fans. Moral of the story is… sports properties must activate on Roblox to ensure relevance for decades to come.”

Red team AI now to build safer, smarter models tomorrow

Join the event trusted by enterprise leaders for nearly two decades. VB Transform brings together the people building real enterprise AI strategy.Learn more

Editor’s note: Louis will lead an editorial roundtable on this topic at VB Transform this month.Register today.

AI models are under siege. With77%of enterprises already hit by adversarial model attacks and41%of those attacks exploiting prompt injections and data poisoning, attackers’ tradecraft is outpacing existing cyber defenses.

To reverse this trend, it’s critical to rethink how security is integrated into the models being built today. DevOps teams need to shift from taking a reactive defense to continuous adversarial testing at every step.

Protecting large language models (LLMs) across DevOps cycles requires red teaming as a core component of the model-creation process. Rather than treating security as a final hurdle, which is typical in web app pipelines, continuous adversarial testing needs to be integrated into every phase of the Software Development Life Cycle (SDLC).

Adopting a more integrative approach to DevSecOps fundamentals is becoming necessary to mitigate the growing risks of prompt injections, data poisoning and the exposure of sensitive data. Severe attacks like these are becoming more prevalent, occurring from model design through deployment, making ongoing monitoring essential.

Microsoft’s recent guidance onplanningred teaming for large language models (LLMs)and their applications provides a valuable methodology for startingan integrated process.NIST’s AI Risk Management Frameworkreinforces this, emphasizing the need for a more proactive, lifecycle-long approach to adversarial testing and risk mitigation. Microsoft’s recent red teaming of over 100 generative AI products underscores the need to integrate automated threat detection with expert oversight throughout model development.

As regulatory frameworks, such as the EU’s AI Act, mandate rigorous adversarial testing, integrating continuous red teaming ensures compliance and enhanced security.

OpenAI’sapproach to red teamingintegrates external red teaming from early design through deployment, confirming that consistent, preemptive security testing is crucial to the success of LLM development.

Traditional, longstanding cybersecurity approaches fall short against AI-driven threats because they are fundamentally different from conventional attacks. As adversaries’ tradecraft surpasses traditional approaches, new techniques for red teaming are necessary. Here’s a sample of the many types of tradecraft specifically built to attack AI models throughout the DevOps cycles and once in the wild:

Integrated Machine Learning Operations (MLOps) further compound these risks, threats, and vulnerabilities. The interconnected nature of LLM and broader AI development pipelines magnifies these attack surfaces, requiring improvements in red teaming.

Cybersecurity leaders are increasingly adopting continuous adversarial testing to counter these emerging AI threats. Structured red-team exercises are now essential, realistically simulating AI-focused attacks to uncover hidden vulnerabilities and close security gaps before attackers can exploit them.

Adversaries continue to accelerate their use of AI to create entirely new forms of tradecraft that defy existing, traditional cyber defenses. Their goal is to exploit as many emerging vulnerabilities as possible.

Industry leaders, including the major AI companies, have responded by embedding systematic and sophisticated red-teaming strategies at the core of their AI security. Rather than treating red teaming as an occasional check, they deploy continuous adversarial testing by combining expert human insights, disciplined automation, and iterative human-in-the-middle evaluations to uncover and reduce threats before attackers can exploit them proactively.

Their rigorous methodologies allow them to identify weaknesses and systematically harden their models against evolving real-world adversarial scenarios.

In short, AI leaders know that staying ahead of attackers demands continuous and proactive vigilance. By embedding structured human oversight, disciplined automation, and iterative refinement into their red teaming strategies, these industry leaders set the standard and define the playbook for resilient and trustworthy AI at scale.

As attacks on LLMs and AI models continue to evolve rapidly, DevOps and DevSecOps teams must coordinate their efforts to address the challenge of enhancing AI security. VentureBeat is finding the following five high-impact strategies security leaders can implement right away:

Taken together, these strategies ensure DevOps workflows remain resilient and secure while staying ahead of evolving adversarial threats.

AI threats have grown too sophisticated and frequent to rely solely on traditional, reactive cybersecurity approaches. To stay ahead, organizations must continuously and proactively embed adversarial testing into every stage of model development. By balancing automation with human expertise and dynamically adapting their defenses, leading AI providers prove that robust security and innovation can coexist.

Ultimately, red teaming isn’t just about defending AI models. It’s about ensuring trust, resilience, and confidence in a future increasingly shaped by AI.

I’ll be hosting two cybersecurity-focused roundtables at VentureBeat’sTransform 2025, which will be held June 24–25 at Fort Mason in San Francisco. Register to join the conversation.

My session will include one on red teaming,AI Red Teaming and Adversarial Testing, diving into strategies for testing and strengthening AI-driven cybersecurity solutions against sophisticated adversarial threats.

If you want to impress your boss, VB Daily has you covered. We give you the inside scoop on what companies are doing with generative AI, from regulatory shifts to practical deployments, so you can share insights for maximum ROI.

Thanks for subscribing. Check out moreVB newsletters here.

The latest state of the game jobs market | Amir Satvat

Amir Satvatprovides a lot of job resources for games. He has built a big community of game people, and they are providing him with a lot of data. And here’s thelatest datafrom Amir Satvat’s Games Community and what it says about games hiring today, across functions, experience levels, and regions.

First,Satvat, who was honored for his work atThe Game Awards, said in aLinkedIn postthat hiring remains concentrated in the middle. This means that most roles, and role growth, is aimed at professionals with five to 15 years of experience. That’s where the bulk of open jobs and actual hires (even if the job description says otherwise) are happening.

Sadly, he noted that early career odds remain extremely low. Even if you’re willing to relocate globally, odds for new grads or early career professionals hover around 7% over 12 months. If you’re staying in North America, that drops to 2%. If you’re not in a major North America hub, that falls to 0.3%.

This pattern has flattened at 7%.

He noted that the categories of jobs are also very different when it comes to demand. Some games areas like narrative roles and business development are dramatically oversubscribed.

“Right now, we’re tracking 52 writing and narrative games roles globally (28 in North America) and 90 total business development games roles worldwide (just 10 for 10+ years of experience),” Satvat said. “When factoring in students, switchers, or unseen applicants, I can easily believe the demand-to-supply ratio for some functions, like these, is 20-30 times, or more.”

Satvat said that overall games hiring momentum is stable, but flattened. Games hiring velocity, which was improving a bit, has leveled off, while non-games roles continue rising, especially for adaptable skill sets.

Career switchers are intensifying competition.

“I now have enough data to say with confidence that middle to late career switchers, without any past games experience, are still actively pursuing the industry, further intensifying competition in already crowded functions,” Satvat said.

And he said layoffs may not be the biggest issue going forward.

“We still forecast 5,000 to 9,000 games layoffs this year. Long-term, global labor cost variances and AI may matter far more, with layoffs becoming a secondary concern,” he said.

If you’re a parent or mentor of a young person considering a games career, please be mindful of the data. “Why not try games?” can be a costly mindset if you’re not informed about the odds.

If you run a collegiate program, Satvat urges you to be transparent with prospective students. Game design, and subfields like narrative, are among the hardest areas to break into. Unfortunately, these are also the main areas from which graduating students seem to cross his desk. Offer broader skill development.

“I continue strongly to recommend non-games roles or retraining as a strong path forward, alongside applying to games,” he said.

✅ For those in games, we must be ready for a future that is likely to include shorter tenures, more project-based work, less remote opportunity, and higher mobility expectations.

✅ For anyone struggling to find a role in oversubscribed functions like games narrative or business development, please know this is a 20-30x+ structural issue. It’s not about your worth.

We’ll keep tracking data and help as best we can.

Satvat aslo recently announced that a new resource is finally here: theNew Games Role Workbook v1.0(Resource #8).

“This is the update I’ve waited three years to give you,” he wrote in a LinkedIn post. “Thanks to collaboration with Mayank Grover and the stellar team at Outscal, we have an improved resource of games and tech roles that will be refreshed twice a week, covering nearly 40,000 roles every three-month cycle, now delivered eight times a month.”

Why twice a week? Because after months of research, he found the critical window for applying to roles is within the first seven days. Anything slower was just not fast enough.

But there’s more. The raw data Mayank’s team pulls comes from many sources. So, just like he did for the original Games Jobs Workbook, he spent months in the background building a system to standardize all roles into 25 categories, based on community feedback and refined for usability.

Account ManagementAdministrative SupportAnimation & CinematicsArt & Tech ArtBusiness Development & SalesCustomer & Community SupportData & AnalyticsDesign & UXEngineering & DevelopmentFacilities & MaintenanceFinance & AccountingGeneral & MiscellaneousHR & RecruitingInternshipIT & SecurityLegal & ComplianceLocalization & TranslationMarketing & AdvertisingOperations & AdminProduction & ProductProject & Program ManagementStrategy & ConsultingTechnical SupportUser ResearchWriting & Narrative

This is standardized across all 38,000+ roles, both games and tech.

That means job seekers can now filter jobs easily across a consistent, logical set of categories. Every job has a direct link to apply, fully searchable, and structured to support your success.

“I’ll continue maintaining the original games jobs workbook as an encyclopedic view: total jobs by company, industry-wide scope, and macro stats. I will use this data to help Mayank ensure we have all companies tracked too,” he said.

But this new workbook is, now, what he recommends using for active job hunting. This is because the team has finally solved (thanks to Mayank’s team) the frequency problem and (with my efforts) the categorization problem that allows equivalent functionality to the Games Jobs Workbook

A resource with fresh roles updated twice a week, now with categorization, smart filters, games and tech roles, and full apply links at a role and location line item level?

He offered his deepest thanks to Mayank Grover and the Outscal team for this incredible collaboration. This wouldn’t be possible without them.

Senator’s RISE Act would require AI developers to list training data, evaluation methods in exchange for ‘safe harbor’ from lawsuits

Join the event trusted by enterprise leaders for nearly two decades. VB Transform brings together the people building real enterprise AI strategy.Learn more

Amid an increasingly tense and destabilizing week for international news, it should not escape any technical decision-makers’ notice that some lawmakers in the U.S. Congress are still moving forward with new proposed AI regulations that could reshape the industry in powerful ways — and seek to steady it moving forward.

Case in point, yesterday,U.S. Republican Senator Cynthia Lummis of Wyomingintroduced the Responsible Innovation and Safe Expertise Act of 2025 (RISE), thefirst stand-alone bill that pairs a conditional liability shield for AI developers with a transparency mandateon model training and specifications.

As with all new proposed legislation, both the U.S. Senate and House would need to vote in the majority to pass the bill and U.S. President Donald J. Trump would need to sign it before it becomes law, a process which would likely take months at the soonest.

“Bottom line: If we want America to lead and prosper in AI, we can’t let labs write the rules in the shadows,” wroteLummis on her account on X when announcing the new bill. We need public, enforceable standards that balance innovation with trust. That’s what the RISE Act delivers. Let’s get it done.”

It also upholds traditional malpractice standards for doctors, lawyers, engineers, and other “learned professionals.”

If enacted as written, the measure would take effect December 1 2025 and apply only to conduct that occurs after that date.

The bill’s findings section paints a landscape of rapid AI adoption colliding with a patchwork of liability rules that chills investment and leaves professionals unsure where responsibility lies.

Lummis frames her answer as simple reciprocity: developers must be transparent, professionals must exercise judgment, and neither side should be punished for honest mistakes once both duties are met.

In a statement on her website,Lummis calls the measure“predictable standards that encourage safer AI development while preserving professional autonomy.”

With bipartisan concern mounting over opaque AI systems, RISE gives Congress a concrete template: transparency as the price of limited liability. Industry lobbyists may press for broader redaction rights, while public-interest groups could push for shorter disclosure windows or stricter opt-out limits. Professional associations, meanwhile, will scrutinize how the new documents can fit into existing standards of care.

Whatever shape the final legislation takes, one principle is now firmly on the table: in high-stakes professions, AI cannot remain a black box. And if the Lummis bill becomes law, developers who want legal peace will have to open that box—at least far enough for the people using their tools to see what is inside.

RISE offers immunity from civil suits only when a developer meets clear disclosure rules:

The developer must also publish known failure modes, keep all documentation current, and push updates within 30 days of a version change or newly discovered flaw. Miss the deadline—or act recklessly—and the shield disappears.

The bill does not alter existing duties of care.

The physician who misreads an AI-generated treatment plan or a lawyer who files an AI-written brief without vetting it remains liable to clients.

The safe harbor is unavailable for non-professional use, fraud, or knowing misrepresentation, and it expressly preserves any other immunities already on the books.

Daniel Kokotajlo, policy lead at the nonprofit AI Futures Project and a co-author of the widely circulated scenario planning documentAI 2027, took tohis X accountto state that his team advised Lummis’s office during drafting and “tentatively endorse[s]” the result. He applauds the bill for nudging transparency yet flags three reservations:

The AI Futures Project views RISE as a step forward but not the final word on AI openness.

The RISE Act’s transparency-for-liability trade-off will ripple outward from Congress straight into the daily routines of four overlapping job families that keep enterprise AI running. Start with the lead AI engineers—the people who own a model’s life cycle. Because the bill makes legal protection contingent on publicly posted model cards and full prompt specifications, these engineers gain a new, non-negotiable checklist item: confirm that every upstream vendor, or the in-house research squad down the hall, has published the required documentation before a system goes live. Any gap could leave the deployment team on the hook if a doctor, lawyer, or financial adviser later claims the model caused harm.

Next come the senior engineers who orchestrate and automate model pipelines. They already juggle versioning, rollback plans, and integration tests; RISE adds a hard deadline. Once a model or its spec changes, updated disclosures must flow into production within thirty days. CI/CD pipelines will need a new gate that fails builds when a model card is missing, out of date, or overly redacted, forcing re-validation before code ships.

The data-engineering leads aren’t off the hook, either. They will inherit an expanded metadata burden: capture the provenance of training data, log evaluation metrics, and store any trade-secret redaction justifications in a way auditors can query. Stronger lineage tooling becomes more than a best practice; it turns into the evidence that a company met its duty of care when regulators—or malpractice lawyers—come knocking.

Finally, the directors of IT security face a classic transparency paradox. Public disclosure of base prompts and known failure modes helps professionals use the system safely, but it also gives adversaries a richer target map. Security teams will have to harden endpoints against prompt-injection attacks, watch for exploits that piggyback on newly revealed failure modes, and pressure product teams to prove that redacted text hides genuine intellectual property without burying vulnerabilities.

Taken together, these demands shift transparency from a virtue into a statutory requirement with teeth. For anyone who builds, deploys, secures, or orchestrates AI systems aimed at regulated professionals, the RISE Act would weave new checkpoints into vendor due-diligence forms, CI/CD gates, and incident-response playbooks as soon as December 2025.

If you want to impress your boss, VB Daily has you covered. We give you the inside scoop on what companies are doing with generative AI, from regulatory shifts to practical deployments, so you can share insights for maximum ROI.

Thanks for subscribing. Check out moreVB newsletters here.