Runway AI Aleph review 2025: hands-on test of the new video editing model

Runway AI’s Aleph model edits video with text prompts

Runway AI has been gaining traction for a while, but it was the release of Aleph that finally made me stop and take a closer look. Unlike other generative tools that create media from scratch, Aleph does something different: it edits existing video footage using natural language prompts.

That alone raises plenty of questions. How accurate is it? Does it actually understand what’s happening in a scene? And can it really relight a shot, change a camera angle, or remove a character without wrecking the footage?

To find out, I tested it myself. I signed up, uploaded clips, and ran it through a series of tasks. But before starting, I also took time to understand how we got here, starting with Runway’s earlier models (Gen‑1 through Gen‑4) and how Aleph builds on that foundation.

This review covers everything: the tech behind it, how it compares to other tools, and what it’s like to actually use it.

What Is Runway AI?

Runway AI is a creative platform built for generating and editing visual media using artificial intelligence. Founded in 2018 by Cristóbal Valenzuela, Anastasis Germanidis, and Alejandro Matamala, it has since become one of the most well-known names in AI video. Unlike tools that focus solely on image generation or animation, Runway’s focus has always been broader: enabling artists, editors, and content creators to generate, edit, and transform media, video especially, with as little friction as possible.

Over the years, Runway has introduced a range of models and tools that span text-to-video generation, image-to-video, and video-to-video editing. It’s been used by independent creators as well as major production studios. Films like Everything Everywhere All At Once and music videos for artists like A$AP Rocky and Kanye West have integrated Runway’s tech into their workflow, often for stylization or complex visual effects.

Today, the platform includes a growing suite of models and features. Users can generate brand-new scenes from scratch, stylize existing footage, or, with the release of Aleph, make detailed, contextual edits to real video clips just by describing what they want.

Runway is positioning itself as a tool for anyone working with visual media, be it narrative film, social content, client projects, or experimental work. Its goal is to speed up the editing process and unlock new creative possibilities through AI.

The evolution of Runway’s models: from Gen‑1 to Aleph

Runway’s approach to generative video has evolved quickly, with each generation of models pushing the limits of what’s possible in AI-powered media. Here’s how their technology has developed over time, and what each version brought to the table.

Gen‑1: the foundation (video-to-video transformation)

Released: Early 2023

Core idea: Take an existing video and apply a new style or look using another visual input.

Gen‑1 was Runway’s first big leap into AI video. It didn’t generate footage from scratch. Instead, it used one video as the source and another image or reference clip as the style guide.

You could input a short video and tell Gen‑1 to make it look like a claymation or a watercolor painting. It could also apply animation-like effects, cinematic lighting, or surreal textures, all while preserving motion and structure from the original footage. It was the first time creators could alter the feel of a video without reshooting or applying manual VFX.

At the time, Gen‑1 felt experimental and limited, but it laid the groundwork for what came next: models that understood content beyond just surface style.

Gen‑2: the first true text-to-video model

Released: Mid-2023

Core idea: Generate short video clips directly from text prompts (with or without image references).

Gen‑2 took things a step further. Instead of needing a source video, you could type a prompt like “a futuristic city at night with flying cars” or “a person walking through a desert”, and it would generate a short animated video clip from scratch.

This was one of the first publicly available models that made full text-to-video generation accessible. You could also combine text with an image reference or depth map to guide the output more precisely.

Limitations still existed. Results often looked dreamlike or unstable, especially with complex motion, but Gen‑2 made it clear that Runway wasn’t just experimenting. It was actively building toward practical, prompt-driven filmmaking tools.

Gen‑3 Alpha: more realism, smoother motion

Released: Mid-2024

Core idea: Improve consistency, realism, and character detail across generated video.

With Gen‑3 Alpha, Runway significantly raised the quality bar. The model improved object consistency from frame to frame (fewer morphing errors), made characters more believable, and handled more complex prompts.

Motion became smoother. Characters could walk, turn, or perform gestures without as much distortion or flickering. Gen‑3 Alpha also made outputs look more like real cinematography, adding a sense of camera movement, depth, and scene logic that earlier versions struggled to maintain.

For creators, this meant fewer compromises. The footage could now be closer to what you’d expect from a stylized short film or commercial-grade animation.

Gen‑3 Turbo: real-time generation at lower cost

Released: Late 2024

Core idea: Speed things up with much faster rendering and more efficient compute.

Gen‑3 Turbo wasn’t just an upgrade in quality; it was a shift in usability. This version was optimized for real-time or near-real-time generation, cutting down the lag between input and result.

By reducing processing time and computing costs, Runway made it easier for creators to iterate quickly, especially useful in testing scenes or building storyboards on the fly.

It didn’t change the look dramatically from Gen‑3 Alpha, but it made the entire workflow more responsive. That responsiveness laid the foundation for more interactive editing tools like Aleph.

Gen‑4: real consistency, real storytelling

Released: March 31, 2025

Core idea: Enable full-scene consistency with better control over characters, environments, and camera motion.

Gen‑4 is where Runway’s ambitions really started to align with traditional filmmaking needs. Unlike earlier models that struggled with continuity, Gen‑4 can maintain the same character across different prompts or shots. The same applies to objects, lighting setups, and environments.

That consistency opened the door for storytelling across multiple scenes, not just isolated clips. You could generate a full sequence where the same character walks from one room to another or change the camera angle of an ongoing action, all while keeping visual identity intact.

Another key feature in Gen‑4 was the ability to upload image references. For example, you could provide a headshot or scene sketch, then prompt the model to generate a video that follows that look and feel. Combined with control over virtual camera motion (like dolly-ins or crane shots), Gen‑4 allowed users to start thinking in full scenes rather than disconnected clips.

Aleph: in-context editing with natural language

Released: July 25, 2025 (Enterprise early access)

Core idea: Instead of generating new footage, Aleph edits real footage using prompts, like a conversation with an AI film editor.

This is the model that caught the industry’s attention. Aleph doesn’t replace your footage; it works with it. Upload a 5–10 second clip and type what you want: change the time of day, remove a person in the background, add fog, relight the room, switch the camera angle, make it look like it’s set in 1980s Tokyo. Aleph reads the context of the scene and applies those changes directly.

Unlike the earlier models, which relied on generating everything from scratch, Aleph focuses on in-context editing. It understands the logic of the footage: what’s in focus, where the light source is, who’s moving, and how that should all change based on your request.

Even better, it supports a chat-style interface. You can prompt it conversationally and it processes and outputs new variations with surprising coherence. And if you prefer more control, there’s a Tool Mode for prompt refinement and targeted adjustments.
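
To make that workflow concrete, here’s a rough sketch of what a scripted version of this kind of edit could look like: submit a short clip plus a plain-language instruction, then poll until the edited render is ready. Everything below (the base URL, endpoint, field names, and job states) is a hypothetical illustration of the pattern, not Runway’s documented API.

```python
# Hypothetical sketch of a prompt-driven video edit, Aleph-style.
# NOTE: the base URL, endpoint paths, and field names below are
# illustrative assumptions, not Runway's documented API.

import time
import requests

API_BASE = "https://api.example-video-editor.com/v1"  # placeholder, not a real endpoint
API_KEY = "YOUR_API_KEY"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}


def edit_clip(video_url: str, prompt: str) -> str:
    """Submit a short clip plus a natural-language instruction, then poll for the result."""
    # 1. Submit the edit job: source footage + the instruction you would
    #    otherwise type into the chat interface.
    job = requests.post(
        f"{API_BASE}/edits",
        headers=HEADERS,
        json={
            "video_url": video_url,  # 5-10 second source clip
            "prompt": prompt,        # e.g. "relight the room for golden hour"
        },
        timeout=30,
    ).json()

    # 2. Poll until the render finishes (real services usually queue jobs).
    while True:
        status = requests.get(
            f"{API_BASE}/edits/{job['id']}", headers=HEADERS, timeout=30
        ).json()
        if status["state"] == "succeeded":
            return status["output_url"]  # URL of the edited clip
        if status["state"] == "failed":
            raise RuntimeError(status.get("error", "edit failed"))
        time.sleep(10)


if __name__ == "__main__":
    result = edit_clip(
        "https://example.com/clips/kitchen-daytime.mp4",
        "Make the scene look like it's set at sunset with warm, golden lighting.",
    )
    print("Edited clip:", result)
```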

It’s not perfect. Some outputs still struggle with fine detail or highly technical edits. But for many use cases like storyboarding, style testing, pitch decks, or experimental edits, Aleph already feels powerful enough to replace several steps of traditional post-production.

My Experience Using Runway AI

Getting Started

Getting started with Runway AI was surprisingly smooth. I clicked the Sign Up button on the homepage, signed in with Google, and within a minute, I was in. No hoops, no setup delays.

Once inside, I was greeted with a chat-style interface that felt immediately familiar, much like the conversational AI tools most of us already use. It’s simple, clean, and easy to navigate. I didn’t have to hunt for settings or tools; everything was clearly laid out and responsive.

Choosing a model was just as straightforward. Once I selected the project type, Runway pointed me toward the model options. It was clear where to go, what I was allowed to use, and what each model was built for. Uploading footage? Drag and drop, done. No compression drama, no failed uploads.

Runway clearly prioritizes usability, and it shows from the very first interaction.

Interface & Workflow

The interface felt clean and intuitive from the start. From my own test run, internal navigation was easy, but I wanted to know if that experience matched the broader user sentiment. Based on current reviews, here’s how things stack up overall:

Intuitive Layout & Navigation

The dashboard is organized logically with tools like video-to-video, text-to-video, lip-sync, and background removal clearly labeled and even taggable as “favorites” for faster access. Most users praise Runway’s visual hierarchy and drag‑and‑drop ease for media uploads.

Chat Mode Feels Familiar

Runway’s chat‑style editing interface is modeled after other conversational AI tools, which made prompt input feel seamless in my case. It doesn’t drag you into menus or hidden settings.

Tool Mode Adds Depth, But a Learning Curve Exists

Users report that the deeper editing options, like timelines and advanced filters, are powerful, especially within the project workspace with green‑screen and tracking tools. But it can feel like jumping from chat to a VFX suite: capabilities are rich, but beginners may feel overwhelmed without guided tutorials.

Waiting Times & Render Queues

Unlimited plan users consistently reported slowdowns or queued rendering, even with unlimited access. One user said generating a 10‑second clip could take up to 20 minutes, and queuing options may be limited to one job at a time.

Others reported throttling after initial usage: despite “unlimited” being advertised, throughput dropped over time.

Prompt testing

Once I had my clip uploaded and Aleph ready, I moved into testing prompts. Runway’s chat interface makes it feel like you’re giving creative direction to an assistant. Just describe what you want, and it tries to make it happen. I wanted to see how well it understood context, motion, and detail, so I gave it a mix of practical and creative instructions.

Prompt 1: “Make the scene look like it’s set at sunset with warm, golden lighting.”

I chose this prompt because lighting changes are something editors and colorists often handle manually, and it can be time-consuming. I wanted to see if Aleph could simulate a natural-looking golden hour effect on a daytime clip without washing it out or creating weird shadows.

The result was better than expected. The scene shifted gently into warmer tones, with golden highlights on surfaces and a softer overall hue. It didn’t just slap a filter over the footage; it adjusted the ambient lighting and gave the entire frame that subtle orange tint you’d get just before sundown. Skin tones remained natural, and shadow areas deepened without turning muddy. Some small areas, like reflections on glass, didn’t fully catch the new light direction, but it wasn’t distracting. For something done with a single prompt, it felt impressively close to what you’d get with careful color grading.

Prompt 2: “Add soft falling snow to the background without changing the subject.”

Next, I tested how well it could layer new atmospheric elements without ruining the original focus. I imagined a daytime clip of someone walking outdoors, and I wanted to turn it into a quiet winter moment.

The snow appeared convincing: soft, slow-falling, and positioned mostly in the background. The subject stayed crisp and unaffected, which told me the model was paying attention to depth. I didn’t have to specify masking or layering; it understood that I wanted to add ambience, not bury the subject in flakes. Occasionally, a snow particle clipped oddly near the edges of the frame, and I noticed a bit of ghosting when the subject moved fast, but overall, it nailed the mood. The entire scene suddenly felt colder and quieter, like a winter afternoon.

Prompt 3: “Change the camera angle to a wide shot of the room.”

This one was the wild card. I wanted to test how far Aleph could push shot composition. The original clip was a medium shot, and I asked for a wide shot, something traditionally impossible without actually reshooting.

To my surprise, it worked. The camera “pulled back” a little, revealing more of the room while keeping the subject in focus. It wasn’t a perfect reconstruction: background elements that weren’t originally visible were clearly AI‑generated, and you could tell in a few spots. But the composition held together. Perspective lines, depth cues, and lighting matched closely enough that it felt usable. For concept work or pitching a visual idea, it was more than enough.

Each of these prompts gave me a different kind of insight into what Runway’s models are capable of. They’re not flawless, but they’re way past the novelty stage. The model understands structure, depth, and style in a way that feels surprisingly human. It doesn’t just apply filters; it interprets what you’re trying to say.

What surprised me (and what could be better)

A few things genuinely caught me off guard in a good way.

What surprised me

  • It understood tone, not just color: When I asked for sunset lighting, it didn’t just overlay orange; it softened the scene, adjusted shadows, and kept skin tones natural.
  • Layered effects were smart: Adding snow didn’t mess with the subject. The flakes stayed in the background, and the depth felt convincing.
  • Camera perspective shifts actually worked: The wide-shot prompt was what really sold me. The model pulled back the frame, showed more of the space, and kept it grounded. Sure, it filled in some background details with AI guesses, but it didn’t feel like a hallucinated mess.
  • It felt like creative collaboration: The model wasn’t just taking commands; it was responding in a way that felt intentional. Almost like it understood what I was aiming for, not just what I typed.

But it’s not all smooth sailing; there are still some edges that need sanding down.

What could be better

  • Access is limited: Aleph is still restricted to select users or enterprise teams. If you’re not in that group, you’ll either wait in line or make do with older models.
  • Rendering isn’t always fast: Even on paid plans, users report delays, throttling, and long queues. For a tool built on speed and iteration, waiting 10–20 minutes per clip can be frustrating.
  • Reliability has its limits: Sometimes, it just misses. You’ll see flickering shadows, odd textures, or motion glitches. It doesn’t happen every time, but when it does, you remember you’re working with a model.
  • It’s not a full editor yet: As powerful as it is, Runway still isn’t a complete substitute for manual editing. There’s no timeline control for layering effects or making frame-level tweaks.

Even with those limitations, Runway impressed me. It’s not perfect, but it feels like a glimpse of how editing will work a few years from now: faster, smarter, and more conversational.

Who is Runway AI for?

Runway AI isn’t just a novelty; it’s designed for people who work with video regularly and want faster, more flexible ways to create, edit, or iterate. That includes filmmakers, editors, animators, and content creators, but the real value shows up in how they use it.

If you’re building moodboards or pitch decks, Runway can help you create polished visuals and sample footage without a full shoot. If you’re testing visual styles for a music video, commercial, or short film, it lets you experiment with lighting, tone, and even camera movement in minutes. For social media content, it can help produce high-impact visuals at speed, especially for brands with tight turnarounds.

Even production teams in film and advertising have started folding Runway into their early-stage workflows, especially when prototyping scenes, testing compositions, or generating visuals for concept approval. You don’t need to know motion design or VFX to get results. That’s what sets it apart.

At the same time, it’s not a total replacement for professional editing tools, not yet. The output still has occasional artifacts, and long-form control is limited. But as a visual sketchpad or ideation tool, it’s surprisingly effective.

Runway is ideal for anyone who needs to visualize ideas quickly without pulling an entire team together or waiting for post-production.

Pricing & access

Runway AI uses a credit-based subscription model, where credits are spent on content generation and editing.

Free plan (basic)

  • Cost: $0 / editor per month (forever)
  • Includes:
    • One-time 125 credits (e.g. 25 seconds of Gen‑4 Turbo or Gen‑3 Alpha Turbo)
    • Up to 720p resolution, watermark on exports
    • Limited to 3 video projects and 5 GB asset storage 

Standard plan

  • $12/month per user, billed annually ($144/yr) 
  • 625 credits/month (refreshed each month)
    • ~52 seconds of Gen‑4 or ~125 seconds of Gen‑4 Turbo / Gen‑3 Alpha Turbo at paid rates
  • Full model access, including Gen‑4 video and image generation and Act‑Two performance capture
  • Features include watermark removal, upscaling, 100 GB asset storage, and unlimited video editing projects 

Pro plan

  • $28/month per user, billed annually ($336/yr)
  • 2,250 credits/month
  • Adds custom voice synthesis, ProRes export, PNG sequence output, 500 GB storage, priority compute queue 

Unlimited plan

  • $76/month per user, billed annually ($912/yr) 
  • Unlimited video generations in Explore Mode at reduced credit rate
  • Includes all Pro-tier features with relaxed rate access to multiple models like Gen‑3 Alpha, Gen‑4 Turbo, and Act-One 

Enterprise plan

  • Custom pricing, tailored for teams or studios
  • Includes all Unlimited benefits plus single sign-on (SSO), custom credit volume, advanced security, workspace analytics, and priority support

How credits map to use

  • Models cost credits by the second or per image: e.g. Gen‑4 ~12 credits/second, Gen‑4 Turbo and Gen‑3 Alpha Turbo ~5 credits/second; Gen‑4 Image ~5–8 credits per image (depending on resolution)
  • Example: on Standard, 625 credits/month translates to ~52 seconds of Gen‑4 or ~125 seconds of Gen‑4 Turbo / Gen‑3 Alpha Turbo (see the quick calculation below)
  • Explore Mode on Unlimited significantly lowers the effective cost per generation, though renders run at a relaxed (slower) rate
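
If you want to sanity-check how far a plan’s credits go, the math is simply credits divided by the per-second rate of the model you use. Here’s a minimal sketch in Python, assuming the approximate rates quoted above; treat the output as an estimate, since rates can change.

```python
# Back-of-the-envelope estimate: seconds of video per plan, based on the
# approximate per-second credit rates quoted in this review (assumptions).

CREDITS_PER_SECOND = {
    "Gen-4": 12,            # standard Gen-4
    "Gen-4 Turbo": 5,       # turbo models cost less per second
    "Gen-3 Alpha Turbo": 5,
}

PLAN_CREDITS = {
    "Free (one-time)": 125,
    "Standard (monthly)": 625,
    "Pro (monthly)": 2250,
}

for plan, credits in PLAN_CREDITS.items():
    estimate = ", ".join(
        f"{model}: ~{credits // rate}s" for model, rate in CREDITS_PER_SECOND.items()
    )
    print(f"{plan}: {estimate}")

# Standard, for example, works out to roughly 52 seconds of Gen-4
# or 125 seconds of either Turbo model per month.
```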

The Standard plan unlocks the full range of generally available models and provides enough credits for light-to-moderate use. Unlimited is ideal for creators running many experimental variations or working at scale.

Access to Gen‑4 vs Aleph

Access to Runway’s models depends on your subscription tier and, in Aleph’s case, your account type.

Gen‑4 (Turbo)

  • Available to all paid plans, including Standard, Pro, Unlimited, and Enterprise
  • Free users do not get access to Gen‑4 for video generation
  • You can generate clips in Gen‑4 Turbo with your monthly credits or in Explore Mode (Unlimited plan)

Gen‑4 is Runway’s flagship text-to-video model. If you’re paying for a plan, you can use it right away to generate high-quality, coherent clips from scratch with text or image prompts.

Aleph

  • Currently in Early Access only
  • Available to Enterprise clients and members of the Creative Partner Program
  • Not included in Free, Standard, Pro, or Unlimited public plans (yet)

Aleph is Runway’s most advanced model, but it hasn’t rolled out to the general public. As of now, it’s only available to select partners, usually studios, agencies, or high-volume creators who are part of an approved Enterprise or Creative Partner account.

If you’re not in one of those groups, you can request access by contacting Runway or applying through the Creative Partner Program.

So if you’re on a regular paid plan, you’ll get full access to Gen‑4, but you’ll have to wait for Aleph, unless you qualify for the Enterprise track.

How to request access to Aleph

If you’re not part of an Enterprise team or Creative Partner Program, you can still express interest in Aleph by filling out Runway’s access request form.

To do that:

  • Visit the official Aleph page: runwayml.com/research/introducing-runway-aleph
  • Scroll down and click “Join the waitlist” or “Request access”
  • You’ll need to provide details about your creative work, your team, and how you plan to use the tool

At the time of writing, Runway hasn’t shared a public release date for Aleph outside of early-access programs. If you’re working in video professionally, especially in film, advertising, or branded content, you’ll likely have a better chance of being approved.

Final verdict: Is Runway AI worth it?

Runway AI has come a long way from experimental style transfers to advanced, prompt-driven video editing, and it’s clear the platform is still evolving fast. Gen‑4 delivers impressive consistency, motion, and cinematography, especially for a model generating everything from scratch. For content creators, indie filmmakers, and digital artists, it’s a powerful way to prototype ideas and visualize scenes without needing a crew or full production setup.

But Aleph is what really shifts the conversation. Instead of generating clips, it edits real footage in a way that feels almost like directing with text. The control it offers, like relighting, aging, restyling, and adding shots, is the kind of tech that’s hard to ignore if you work in video at any level. That said, access is still limited. Until Aleph is publicly available, most users will be working with Gen‑4 and earlier tools.

If you’re experimenting, pitching, or creating visual content on a tight budget or timeline, Runway is well worth exploring. It’s not a full replacement for traditional editing just yet, but it’s getting close and it’s already useful as a creative accelerator.

Once Aleph becomes more widely available, the line between editing and prompting might blur for good.

Wrapping up

Runway AI is building tools that could change how we approach video production, especially with the direction Aleph is headed. It’s not perfect, and access is still limited, but it already shows how much editing can be simplified with the right AI support.

If you’re working with video in any form, it’s worth exploring. Even just testing what Gen‑4 can do might shift how you think about your workflow.

Have you tried Runway AI yet? Share your experience in the comments, or drop any questions. I’m happy to chat.
