Image-Based Testing With HP UFT’s “Insight” Feature
Classic object recognition in HP UFT hits a wall when your application’s user interface doesn’t expose clear properties. The controls are visible on the screen, but UFT cannot recognize them as proper test objects.
This is where HP UFT’s image-based testing, delivered through the Insight feature, comes in. Instead of relying on technical properties, UFT compares screen pixels to find visual matches. Used correctly, Insight keeps complex interfaces inside your automation suite instead of pushing them back to manual testing.
What Is Image-Based Testing in HP UFT?
Image-based object identification in HP UFT / UFT One is called Insight. Instead of describing a test object with attributes such as HTML ID, title, or ClassName, UFT keeps a small screenshot of the control in the object repository. At run time it searches the screen for areas that look like the stored image, using an internal similarity threshold to decide what counts as a match.
This type of test object is usually referred to as an InsightObject or Insight test object. You can click it, type into it, drag it, or use it in checkpoints and output values, and it behaves like any other GUI test object in UFT. The key difference is that instead of a list of identification properties, the description consists mainly of the image, sometimes supplemented by an ordinal identifier or a visual relation to nearby objects.
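For orientation, here is a minimal VBScript sketch of what such steps look like, assuming a hypothetical repository with a “BillingApp” window and Insight objects named “ExportIcon” and “SearchField”:

    ' Minimal sketch: interacting with Insight test objects stored in the
    ' object repository. "BillingApp", "ExportIcon", and "SearchField" are
    ' hypothetical repository names used only for illustration.
    If Window("BillingApp").Exist(10) Then
        ' Click a custom-drawn toolbar icon identified purely by its image
        Window("BillingApp").InsightObject("ExportIcon").Click

        ' Type into a field that exposes no usable properties; the click
        ' gives it focus before the keystrokes are sent
        Window("BillingApp").InsightObject("SearchField").Click
        Window("BillingApp").InsightObject("SearchField").Type "Q4 invoices"
    Else
        Reporter.ReportEvent micFail, "Launch check", "BillingApp window was not found"
    End If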
This approach is especially useful for:
- Legacy applications that render controls in non-standard ways.
- Remote desktops or Citrix / terminal sessions where UFT has only a bitmap to work with.
- Custom-drawn widgets, charts, or canvas elements that don’t expose stable technical attributes.
In those circumstances, where standard property-based recognition falls short, UFT’s Insight feature provides a practical image-based alternative.
When to Use the Insight Feature Instead of Standard Object Recognition
Even though Insight is always at your disposal, we advise against using it everywhere. Property-based recognition remains the default for most GUI test automation: it supports the full object model and is more resilient to layout changes. Insight is your safety net for the UI elements that resist standard recognition.
You and your team should consider Insight when:
- UFT can’t identify a control at all, even after tuning add-ins and using the object spy.
- The same logical object exposes unstable properties across builds or environments.
- You’re testing through a remote session where UFT only sees a flat screen image.
- You need to validate highly visual elements such as icons, charts, or color states.
However, Insight is a poor fit when a control jumps around the screen from run to run, or when the user interface already exposes solid properties and hierarchy. In those situations, property-based recognition usually gives better maintainability and more Object Recognition Alternatives (for example, regular expressions on identification properties or visual relation identifiers).
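As an illustration of the property-based route, the following sketch uses descriptive programming with regular expressions on identification properties; the object hierarchy and property values are hypothetical.

    ' Minimal sketch of a property-based alternative: descriptive programming
    ' with regular expressions on identification properties (names and values
    ' here are hypothetical). Property values in UFT descriptions are treated
    ' as regular expressions by default.
    Dim loginPage
    Set loginPage = Browser("title:=.*Customer Portal.*").Page("title:=Login.*")
    loginPage.WebEdit("name:=user_name_\d+").Set "jsmith"   ' matches user_name_1, user_name_2, ...
    loginPage.WebButton("name:=Sign [Ii]n").Click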
Insight is not a substitute for standard object models; rather, we view it as an additional tool in our test automation services that earns its keep when the technology stack works against you.
Setting Up Image-Based Testing with HP UFT’s Insight Feature
To make Image-Based Testing with HP UFT productive, we begin by configuring both UFT and the application so that captured images remain consistent and meaningful from run to run.
Preparing the Application and UFT
When capturing any Insight objects, we suggest:
- Standardizing screen resolution and DPI scaling across your test machines, because Insight compares bitmaps and even slight scaling differences can break a match.
- Maintaining consistency in the test environment’s language packs, color schemes, and themes.
- Navigating the application until the target controls are fully visible and free of popups and tooltips.
- Verifying that Insight learning and recording are turned on in UFT’s GUI Testing settings.
These basics reduce the chance that your first batch of images has to be recaptured right away, and they align with the planned setup we suggest in our automation testing best practices.
Capturing Insight Objects
Capturing an Insight object feels much like learning a regular object, except that you select a region of the screen rather than a named control.
A typical flow is:
- Open the object repository and add a new test object in Insight mode, or start a recording session with Insight enabled.
- Move the crosshair over the application and draw a rectangle around the visual element you want to identify, such as an icon, button, menu item, or cell in a custom grid.
- Trim the capture so it excludes the surrounding toolbar or panel, keeping only the components that make the control distinctive.
- To keep your scripts readable, give the test object in the repository a clear, useful name.
Behind the scenes, UFT stores the captured image with the test object and attaches description properties such as the similarity setting and the image itself. At run time, it uses those properties to find the best visual match on the current screen.
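The sketch below illustrates reading that stored description at run time; the “OrdersApp” and “RefreshIcon” names are hypothetical, and we assume the similarity setting is exposed as a description property you can query with GetTOProperty.

    ' Minimal sketch: inspecting the description UFT stored for a captured
    ' Insight object. "OrdersApp" and "RefreshIcon" are hypothetical repository
    ' names; "similarity" is assumed to be exposed as a description property.
    Dim storedSimilarity
    storedSimilarity = Window("OrdersApp").InsightObject("RefreshIcon").GetTOProperty("similarity")
    Reporter.ReportEvent micDone, "Insight description", _
        "RefreshIcon stored similarity: " & storedSimilarity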
Building and Running Insight-Based Scripts
Once they are in the repository, you use Insight objects in UFT just like any other object. A straightforward login flow might involve clicking an Insight-based button, entering your username in an Insight-based field, and then clicking a submit icon; you can combine these steps with ordinary object-based steps in the same action or test.
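A minimal sketch of such a login flow, assuming the repository contains Insight objects named “LoginButton”, “UserField”, and “SubmitIcon” under a hypothetical “CustomerPortal” window:

    ' Minimal sketch of the login flow described above; all object names
    ' are hypothetical repository entries.
    With Window("CustomerPortal")
        .InsightObject("LoginButton").Click          ' open the login form
        .InsightObject("UserField").Click            ' give the field focus
        .InsightObject("UserField").Type "jsmith"    ' send the username keystrokes
        .InsightObject("SubmitIcon").Click           ' submit the form
    End With
    Reporter.ReportEvent micDone, "Login", "Login flow executed via Insight objects"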
We use a hybrid approach by default on client projects: classic UFT objects for stability and ease of maintenance throughout the flow, and Insight objects only where normal recognition is weak. Placing Insight steps inside reusable actions or function libraries that fit your framework increases reuse while keeping the standard parameterization of test data.
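One way to do that is to wrap each Insight interaction in a small library function so the rest of the framework calls it like any other keyword; the function and object names below are hypothetical.

    ' Minimal sketch of a function-library wrapper around an Insight click,
    ' so tests call it like any other keyword. Names are hypothetical.
    Public Function ClickInsightIcon(parentWindow, iconName)
        ClickInsightIcon = False
        If parentWindow.InsightObject(iconName).Exist(10) Then
            parentWindow.InsightObject(iconName).Click
            ClickInsightIcon = True
        Else
            Reporter.ReportEvent micFail, "ClickInsightIcon", _
                "Insight object '" & iconName & "' was not found on screen"
        End If
    End Function

    ' Example call from a test or reusable action
    Call ClickInsightIcon(Window("BillingApp"), "ExportIcon")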
Designing Stable Insight-Based Tests (Best Practices)
Since Insight relies on images, test stability is highly dependent on the quality and uniqueness of those images. We have gathered patterns over time that significantly increase the reliability of Insight-based tests.
Choosing Clear and Unique Images
We try to capture only the pixels that truly belong to the control. If the captured region is large and takes in nearby controls or dynamic content, any change to those pixels can break the match.
Good patterns include keeping the capture rectangle as tight as possible without cutting off important borders, avoiding dynamic text such as timestamps unless that text is exactly what you need to verify, and focusing on the icon or label that distinguishes the control. When the same image appears on the screen more than once, ordinal or visual relation identifiers help UFT pick the correct instance reliably in daily runs, as in the sketch below.
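The sketch below shows one way to target a specific instance. It assumes UFT’s descriptive-programming support for Insight objects via an image file (the imgsrc property) combined with the index ordinal identifier; treat both as assumptions, and the file path and names as purely illustrative.

    ' Minimal sketch: selecting one instance when the same image matches
    ' several on-screen icons. Assumes descriptive programming with "imgsrc"
    ' (image file) plus the "index" ordinal identifier; names are hypothetical.
    Window("MailClient").InsightObject("imgsrc:=C:\TestImages\flag_icon.png", _
        "index:=1").Click   ' second matching flag icon, counting from 0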
Working with Screen Resolution and Scaling
Because Insight compares bitmaps, differences in resolution, aspect ratio, or DPI scaling can cause “object not found” errors even if the UI looks correct to a human tester. We typically:
- Standardize a resolution and scaling configuration for all UFT machines used in pipelines.
- Avoid running the same Insight-heavy test across many display configurations.
- When necessary, tune the similarity threshold so small pixel shifts or anti-aliasing differences don’t break the match.
This is one area where Visual Testing with UFT behaves like other screenshot-driven tools: environment consistency matters just as much as script design.
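When a match fails only because of such minor rendering differences, one option is to relax the threshold for that run, as in this sketch (object names are hypothetical, and we assume the similarity description property accepts a percentage value):

    ' Minimal sketch: relaxing the similarity threshold in memory for one run
    ' when minor anti-aliasing or scaling differences break the match.
    Dim saveIcon
    Set saveIcon = Window("ReportViewer").InsightObject("SaveIcon")
    If Not saveIcon.Exist(5) Then
        ' Lower the threshold slightly for this run only; the repository keeps its value
        saveIcon.SetTOProperty "similarity", 80
    End If
    saveIcon.Click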
Combining Insight with Standard Object Models
We rarely treat Insight as a standalone framework. Instead, we mix Insight objects into existing automation design:
- Navigation and structural checks use standard UFT objects wherever properties are reliable.
- Insight objects are reserved for highly visual areas and screens reached through remote sessions.
- Data handling, logging, and integration with continuous testing flows follow the same conventions as the rest of the framework.
Following scalable script practices, such as our work on scalable automated test scripts, keeps your object repository manageable and limits the number of Insight images you need to maintain as the product evolves.
Managing Maintenance and Dynamic UI Changes
Even with careful design, visual changes in the user interface can cause Insight-based tests to fail: perhaps a designer updated the icon set, rearranged a toolbar, or altered the colors for accessibility.
Typical symptoms include “object not found” errors for previously stable Insight objects, UFT clicking the wrong instance of a repeated icon, or tests that pass in one environment but fail in another with a different theme.
In order to manage maintenance, we typically:
- Recapture images after a deliberate visual redesign and keep a brief record of which Insight objects changed.
- Group related Insight objects (for instance, all the icons from a custom toolbar) so they can be reviewed together after a UI change.
- Adjust the similarity threshold when the basic shape stays the same but there is a subtle visual difference, such as a color shade or anti-aliasing change.
- Build fallback logic into scripts, such as trying a second Insight object or using a keyboard shortcut, for cases where the primary image cannot be found (a minimal sketch follows this list).
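A minimal sketch of that kind of fallback, assuming a hypothetical “ReportViewer” window with a primary “PrintIcon” capture, an alternate “PrintIconDark” capture, and Ctrl+P as the application’s print shortcut:

    ' Minimal sketch of fallback logic when the primary Insight image fails.
    ' All names are hypothetical, and Ctrl+P is assumed to print in this app.
    Dim app
    Set app = Window("ReportViewer")
    If app.InsightObject("PrintIcon").Exist(5) Then
        app.InsightObject("PrintIcon").Click
    ElseIf app.InsightObject("PrintIconDark").Exist(2) Then
        app.InsightObject("PrintIconDark").Click
    Else
        ' Last resort: drive the same feature through its keyboard shortcut
        app.Activate
        app.Type micCtrlDwn & "p" & micCtrlUp
        Reporter.ReportEvent micWarning, "Print step", _
            "Both Insight captures failed; used the keyboard shortcut instead"
    End If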
Handled this way, Insight stops being a one-off fix that everyone is afraid to touch and becomes a manageable part of your regression packs, especially when combined with the long-term maintenance mindset we outline in our performance and automation content.
Bringing Image-Based Testing into Your Automation Plan
“Let’s automate everything with Insight” is rarely how we start real projects. Instead, we fold Image-Based Testing with HP UFT into an existing strategy that combines unit testing, API testing, and conventional GUI automation.
A practical pattern is to:
- Use standard UFT objects for most functional coverage.
- Add Insight-based checks around complex dashboards, custom graphics, or embedded remote sessions.
- Feed results into the same reporting and analytics pipeline, so your team sees one coherent picture of quality.
Running Insight-heavy tests in targeted smoke packs during CI/CD, on environments that match your production resolution and themes, lets you catch rendering or theming problems early without slowing down property-based checks. That fits in well with our observations about continuous testing in CI/CD.
When those tests reveal performance issues like frozen widgets or delayed rendering, they complement the work you do with our performance testing services and related practices in areas like thread management in performance testing and response time vs throughput analysis.
If your team needs help deciding where Insight belongs in your test design, integrating it with other test automation services, or aligning it with your performance and regression strategy, we can work with you to build a setup that fits your applications and delivery model.