Introducing BVX: VeyoLabs' Proprietary Image Enhancement Engine — 4X Upscale, ReSkin, and Outpaint in One Tool
A deep technical breakdown of the three pillars of BVX: BVX4 (hyper-upscaling to 16K), BVX8 (face & skin reconstruction), and Expand (AI outpainting) — and how they work inside the new standalone Image Upscaler.
Today we're releasing the BVX Image Upscaler — a standalone tool inside VeyoLabs that brings three of our most powerful image processing capabilities together in a single, focused interface.
BVX stands for Beyond Vision X. It's the umbrella for the proprietary enhancement pipeline we've been building into Vision Studio's canvas for over a year. Now it lives as its own tool, accessible directly from your dashboard — no canvas required.
This article is a full technical and creative breakdown of what BVX actually does, how each mode works, and why we built it this way.
The Problem BVX Solves
AI-generated images have a resolution ceiling problem.
Even the best models top out at around 1024×1024 or 1536×1536 natively. When you take those outputs and try to print them large-format, place them in high-resolution video, or export them for commercial use, they fall apart. The pixelation is obvious. The skin tones become plastic. The fine details — fabric weave, hair strands, skin pores — disappear into compression blur.
Traditional upscaling tools (Topaz Gigapixel AI, waifu2x, and classic interpolation filters) enlarge the pixels that already exist. They're better than nothing. But they fundamentally can't invent missing detail; they can only extrapolate from what's already there.
BVX takes a different approach: proprietary neural reconstruction, trained end-to-end on high-fidelity image pairs.
Instead of enlarging what exists, BVX passes the image through a multi-stage reconstruction pipeline driven by VeyoLabs' internally trained enhancement models. These models don't stretch pixels — they have learned to understand image structure at a perceptual level and reconstruct the detail that was always implied by the original shading but never rendered.
The result is output that doesn't look "upscaled." It looks like it was captured that way from the start.
BVX4 — 4X Hyper-Upscale to True 16K
BVX4 is the core upscaler. It takes any image — a generation, a photograph, a scan — and reconstructs it at true 16K equivalent resolution.
What it does technically
When you trigger BVX4, the image enters a proprietary multi-stage reconstruction pipeline. VeyoLabs' internal enhancement model — trained on millions of matched low-resolution / ultra-high-resolution pairs — performs the following operations in a single learned pass:
- Artifact suppression: elimination of JPEG blocking, quantisation noise, and ringing at sub-pixel precision
- Edge reconstruction: high-frequency sharpening guided by learned edge priors, not generic convolution kernels
- Micro-texture synthesis: fabric weave, skin pore topology, hair strand separation, and surface grain are recovered from contextual cues in the original signal
- Dynamic range expansion: per-channel colour depth enhancement using the model's learned photometric priors
- Geometry lock: structural identity, pose, and spatial relationships are frozen via the pipeline's preservation layer throughout the reconstruction pass
The pipeline runs with a consistency weight of 0.9 — meaning 90% of each output pixel is anchored to the original signal, with 10% contributed by the model's learned reconstruction prior. This ratio was determined empirically to be the inflection point between geometric fidelity and perceptual richness: below it, the output begins drifting from the source identity; above it, fine detail remains underrepresented.
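As an illustration only — not BVX4's actual internals, which apply this constraint inside the model rather than as a post-blend — the 0.9 consistency weight can be pictured as a per-pixel anchor between a resampled original and the model's reconstruction:

```python
import numpy as np

def blend_with_consistency(original_up: np.ndarray,
                           reconstruction: np.ndarray,
                           weight: float = 0.9) -> np.ndarray:
    """Anchor `weight` of each output pixel to the resampled original,
    with the remainder contributed by the reconstruction. Hypothetical
    sketch of the consistency-weight idea, not the real pipeline."""
    return weight * original_up + (1.0 - weight) * reconstruction

# Toy example: a flat grey original and a reconstruction with detail.
orig = np.full((2, 2), 0.5)
recon = np.array([[0.4, 0.6], [0.6, 0.4]])
out = blend_with_consistency(orig, recon)  # 90% original, 10% detail
```

Lowering `weight` lets the reconstruction dominate (risking identity drift); raising it suppresses the synthesised detail — the same trade-off the 0.9 setting balances.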
What it looks like in practice
The difference is most visible in:
- Portrait photography — skin becomes tactile. You can see the natural variation in pore density, subtle under-eye shadow, the individual lashes that were previously blurred together.
- Architectural shots — brickwork, concrete texture, window reflections become physically accurate instead of smudged.
- Cinematic frames — film grain resolves into crisp detail. Fabric textures stop looking painted and start looking woven.
- Text and logos — edges that were anti-aliased into illegibility become clean and sharp.
When to use BVX4
Use BVX4 whenever you need to:
- Prepare an AI generation for large-format print
- Export a creative asset for commercial licensing (brands require minimum resolutions)
- Upscale a reference image before using it as a base for further generation
- Fix a great composition that came out at too low a resolution
BVX8 — Face & Skin Ultra-Reconstruction
BVX8 is the ReSkin engine. It's purpose-built for human subjects — portraits, characters, avatars, and any image where faces and skin are the primary subject.
The Uncanny Valley problem
AI-generated faces have improved dramatically, but they still fail in a specific and recognisable way: the skin looks processed. It's too smooth, too uniform, too perfect in a way that reads as digital rather than human. Real skin has variation — subtle discolouration, pore structure, micro-shadows, the faint shine of natural sebum, the soft edge of hair follicles.
BVX8 is the fix for this.
What it reconstructs
BVX8's proprietary dermal reconstruction model — trained specifically on high-resolution facial photography datasets — targets the following layers of the human face:
- Pore structure — restores natural variation in pore density across the face, following the anatomical pattern (larger on the nose, finer around the eyes)
- Micro-texture layers — subtle wrinkles, natural freckles, the texture of the lip surface, the precise edge definition of the eyelid fold
- Subsurface scattering — the way light passes through skin and bounces back, giving the appearance of depth and warmth rather than a flat matte surface
- Hair edge definition — individual eyelashes, eyebrow hairs, and scalp hairline rendered at pixel-level precision
- Eye specular — the wet shimmer of the eye, the reflected light in the iris, the precise edge of the sclera
All of this is done while explicitly preserving identity — bone structure, facial proportions, expression, and skin tone are locked. BVX8 enhances the face without changing the person.
What it doesn't touch
By design, BVX8 ignores everything outside the face and skin:
- Clothing, jewellery, accessories — unchanged
- Background environment — unchanged
- Lighting direction and colour grading — preserved exactly
- Pose and composition — untouched
This makes BVX8 safe to run on any portrait without fear of unwanted changes to the rest of the image.
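One way to picture that guarantee — purely illustrative, with a hypothetical helper and an externally supplied mask, not BVX8's real internals — is masked compositing, which keeps every non-face pixel bit-identical to the input:

```python
import numpy as np

def composite_face_only(original: np.ndarray,
                        enhanced: np.ndarray,
                        face_mask: np.ndarray) -> np.ndarray:
    """Keep every pixel outside the face mask identical to the original.
    `face_mask` is a float array in [0, 1] where 1 marks face/skin.
    Hypothetical illustration of BVX8's scoping behaviour."""
    mask = face_mask[..., None] if original.ndim == 3 else face_mask
    return mask * enhanced + (1.0 - mask) * original

# Toy example: only the masked pixels receive the "enhanced" values.
orig = np.zeros((2, 2))
enh = np.ones((2, 2))
mask = np.array([[1.0, 0.0], [0.0, 1.0]])
out = composite_face_only(orig, enh, mask)
```

Wherever the mask is zero — clothing, background, lighting — the output is mathematically guaranteed to equal the input.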
Model choice matters here
BVX8 runs on whichever BVX model you select:
- BV4X (Vision Pro) — The highest-quality reconstruction. Takes 30–60 seconds. Best for final output, portfolio work, and commercial assets.
- BV8X (Vision Flash) — Faster reconstruction at slightly reduced micro-detail. Takes 15–30 seconds. Best for rapid iteration, preview runs, and non-commercial uses.
Expand — AI Outpainting: Scene Extension Beyond the Frame
Expand is the most visually dramatic of the three BVX modes. It takes your image and extends the world beyond the edges of the frame — generating new environment, background, and spatial context that seamlessly continues the original scene.
What outpainting actually means
Standard image generation fills a canvas. Outpainting — real outpainting, not just adding a blurred gradient around the edges — means the model must understand the spatial logic of the existing scene and extrapolate it outward in a way that obeys the same perspective, lighting, colour grading, and atmospheric properties.
Bad outpainting looks pasted. Good outpainting makes the original frame look like a crop from a much larger original.
BVX Expand targets the second category.
How Expand works
BVX Expand runs a spatial-coherence model trained on large-scale scene understanding datasets. It performs the following steps internally:
- Perspective geometry extraction — the model reconstructs the scene's 3D spatial layout: horizon line, vanishing point positions, and camera focal plane, all without user input
- Photometric sampling — colour temperature, shadow direction, and tonal distribution are extracted from the existing image and used to constrain all newly generated regions
- Semantic continuation — ground planes, sky gradients, architectural geometry, foliage density, and atmospheric depth are extrapolated following the scene's physical logic
- 4× canvas extension — the original image is embedded at the centre of a new 4× canvas, with all four borders filled by the continuation model at matching resolution
- Identity preservation lock — a hard geometric constraint prevents any modification to the content that existed in the original frame
The result is an image that feels like you zoomed out from the original crop. The new environment doesn't look generated — it looks discovered.
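The canvas-embedding and identity-lock steps above can be sketched in a few lines with Pillow. This is an illustration, not Expand's actual implementation, and it assumes "4× canvas" means 4× the area (2× per side):

```python
from PIL import Image

def make_expand_canvas(original: Image.Image, scale: int = 2):
    """Embed `original` at the centre of an enlarged canvas and return
    the canvas plus a mask of the border regions a continuation model
    would fill (255 = generate, 0 = locked original pixels).
    Illustrative sketch only."""
    w, h = original.size
    canvas = Image.new("RGB", (w * scale, h * scale), (0, 0, 0))
    mask = Image.new("L", (w * scale, h * scale), 255)
    offset = ((canvas.width - w) // 2, (canvas.height - h) // 2)
    canvas.paste(original, offset)
    # Zero out the mask over the original: a hard identity lock that
    # forbids any modification to the existing frame.
    mask.paste(Image.new("L", (w, h), 0), offset)
    return canvas, mask

# Toy example: a 4x4 original embedded in an 8x8 canvas.
orig = Image.new("RGB", (4, 4), (200, 50, 50))
canvas, mask = make_expand_canvas(orig)
```

The continuation model then fills only the `mask == 255` border regions, conditioned on the perspective and photometric cues extracted from the centre.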
Creative applications for Expand
- Establishing shots from portrait crops — Take a tight character portrait and reveal the world they're standing in
- Environment design — Generate a small reference environment and extend it into a full scene
- Product placement — Take a product shot and place it into an extended branded environment without re-generating from scratch
- Storyboard expansion — Generate a key frame and then expand it to reveal context around the central action
- Print-ready backgrounds — Take a hero image and expand it to fill a widescreen or large-format canvas
The BVX Model System: BV4X vs BV8X
Across all three modes, you choose between two underlying AI models. We've branded them to reflect their role in the BVX system:
BV4X — Vision Pro
- Best for: Maximum quality, commercial output, final delivery
- Characteristic: Highest micro-detail fidelity, slowest processing
- Display: Purple badge in the model selector
- Typical processing time: 30–60 seconds depending on image complexity
BV8X — Vision Flash
- Best for: Speed, iteration, preview quality
- Characteristic: Strong quality, faster output, slightly reduced micro-texture depth
- Display: Amber badge in the model selector
- Typical processing time: 15–30 seconds
The model naming convention matches the one you already see in Vision Studio's canvas editing popup — so if you're familiar with BV4X and BV8X from the canvas toolbar, the upscaler uses exactly the same underlying infrastructure.
Creative Directives — Steering the Enhancement Engine
Each BVX mode ships with a proprietary base directive set — the result of thousands of training iterations across resolution, material, and scene diversity datasets. These base directives encode the core reconstruction behaviour and are what give BVX its out-of-the-box quality floor.
The Directive toggle in the toolbar exposes an optional one-line input that layers a creative constraint on top of the base algorithm at inference time. This lets you steer the reconstruction toward a specific aesthetic without losing the technical precision baked into the base model.
Examples of effective creative directives:
- "enhance the sunset colour grading" — BVX4 reconstruction biased toward amplified warm colour depth
- "cinematic colour grade, teal and orange" — ReSkin output conditioned on a specific tonal palette
- "extend into a foggy forest environment" — Expand continuation constrained to a specific environment type
- "preserve the vintage film grain aesthetic" — signals the pipeline to retain analogue texture characteristics rather than resolving them to clean digital sharpness
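To make the layering concrete: the base directive set ships with the mode, and the user's one-line directive is added on top at inference time. VeyoLabs has not published a public API, so every field name below is an assumption, shown only to illustrate the shape of such a request:

```python
# Hypothetical request payload for a BVX run with a creative directive.
# All field names are assumptions for illustration; none are a
# documented VeyoLabs API. The base directive set is implied by the
# chosen mode; "directive" is the optional user-supplied layer.
payload = {
    "mode": "BVX4",          # or "BVX8", "EXPAND"
    "model": "BV4X",         # or "BV8X"
    "consistency": 0.9,      # per the spec table for BVX4
    "directive": "preserve the vintage film grain aesthetic",
}
```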
The Before/After Compare Slider
Every BVX run produces a side-by-side comparison slider — the same interaction pattern used inside Vision Studio's canvas editing popup.
Drag the handle to reveal the original on the left and the BVX output on the right. The labels at the bottom corners identify which version is which, and are colour-coded by mode:
- BVX4 4X Ultra — cyan
- BVX8 ReSkin — pink
- Expanded — indigo
When you're satisfied with the result, the Download button in the header exports the enhanced image directly to your device.
How the BVX Upscaler Fits Into Your Workflow
BVX is not just a standalone tool — it's a natural extension of your full VeyoLabs production pipeline:
Step 1: Generate in Vision Studio — get your composition, character, and scene right at generation resolution.
Step 2: Open the Image Upscaler from your dashboard sidebar (under AI Tools → Image Upscaler).
Step 3: Upload the generation. Choose your BVX mode and model. Run.
Step 4: Compare the result, optionally re-run with a custom creative directive, download.
Step 5: Use the enhanced output as:
- A final deliverable for a brand client
- A reference image for further generations in the canvas
- A large-format print asset
- A Veyo TV thumbnail or cover image
- A social content export at true high resolution
Technical Specifications
| Feature | BVX4 Upscale | BVX8 ReSkin | Expand |
|---|---|---|---|
| Output resolution | True 16K equivalent | True 16K equivalent | 4× expanded canvas |
| Subject preservation | 100% geometry locked | 100% identity locked | 100% original preserved |
| Consistency strength | 0.9 | 0.8 | 0.8 |
| Max input size | 75 MB | 75 MB | 75 MB |
| Accepted formats | PNG, JPG, WEBP | PNG, JPG, WEBP | PNG, JPG, WEBP |
| Creative directive support | Yes | Yes | Yes |
| Model options | BV4X / BV8X | BV4X / BV8X | BV4X / BV8X |
Access
The BVX Image Upscaler is available to all VeyoLabs Pro subscribers. Log in and navigate to Dashboard → AI Tools → Image Upscaler, or go directly to veyolabs.com/upscaler.
If you're not yet on a Pro plan, the 3-day free trial includes full access to the upscaler — no credit card required to start.
What's Next for BVX
This release is the standalone tool. Coming next:
- BVX Region Mode — Apply BVX4 or BVX8 to a specific painted or selected region of an image, rather than the full canvas
- BVX Video — Frame-consistent enhancement across video sequences, matching BVX4's quality to individual frames while preserving temporal coherence
- BVX Batch — Queue multiple images and run the same BVX mode across all of them in a single session
We build in public. Follow the blog to catch each release as it ships.