ROLE
Product Designer
COLLABORATORS
Sam Stolzoff
Ross Pearlman
Sarah Cordell
Devin Toth
SKILLS
Interaction Design
Prototyping
Accessibility
Branding
THE OUTCOME
Improved brand presence, usability, and accessibility, supporting an increase in sales.
THE SUMMARY
What is HaloCortex?
THE PAIN POINT
It's a ball, it's a plane—no, it's... a flaming hedgehog?

COMPETITIVE ANALYSIS
I examined three major players in the data analytics space: Scale AI's Defense Llama, Meta's Llama, and Palo Alto Networks' Cortex Xpanse. I focused on how each presented data, highlighted its LLM offering, and catered to the needs of security analysts.
Scale AI: Defense Llama
Meta: Llama
Palo Alto Networks: Cortex Xpanse
UX RESEARCH
I needed to understand the two major audiences: an operator's workflow when using data analytics platforms, and how business development would use the platform for situational awareness.
I conducted user interviews with security analysts, business development members, and proxy users to understand the challenges in their common workflows and how to present dense data without creating cognitive overload.
THE EVALUATION
A heuristic evaluation allowed for a thorough yet timely analysis of the platform and helped minimize design deficiencies
As the lead, I guided the intern through a heuristic evaluation of the software to assess usability. Drawing from earlier conversations with team analysts, I identified cognitive overload as a key risk. While the intern systematically applied the evaluation framework, I concentrated on uncovering design elements that could contribute to mental fatigue. This dual focus ensured we captured both broad usability issues and critical user challenges.
Aesthetic and Minimalist Design
1. Strong Visual Aesthetic, but Color Contrast Needs Attention
The blue/navy color palette works well throughout the design, but the text color choices could be improved to enhance readability and contrast.
Match Between System and the Real World
1. Unclear Terminology
While most wording is understandable, some terms lacked definitions, which could hinder user comprehension and confidence.
Visibility of System Status
1. Slow Feature Loading
Several features on the website took a long time to load, often resulting in no visible results.
2. Lack of Feedback During Load Times
The loading symbol provided no clear progress indication, causing users to feel the site was broken and potentially abandon their tasks.
Recognition Rather Than Recall
1. Inconsistent Iconography Causes Confusion
Icons were not used consistently, leading to user confusion about their meaning. Consistent icon design is critical for intuitive navigation and understanding.
2. Misuse of Familiar Icon Patterns
The use of star icons in the carousel conflicted with common expectations: users typically associate stars with "favorites." This mismatch can mislead users and should be reconsidered.
Error Prevention
1. Inefficient Analytics and Lack of Feedback
A portion of the analytics took over two minutes to run and ultimately yielded no results, without providing any feedback or indication of progress, leading to potential user frustration.
2. High Risk of Input Errors
The filter results bar made it easy to submit typos, lacking safeguards or validation to help prevent input mistakes.
Help and Documentation
1. Inaccessible Help Documentation
The learning center in the side menu currently fails to open; help resources overall should be easier to reach.
2. Too Few Clickable Help Icons
Adding more "i" or "?" icons throughout the interface would provide quick, accessible information and assistance.
User Control and Freedom
1. Limited Undo Options, but Basic Exits Were Available
There were few opportunities to undo or exit actions performed by the user. However, "close" buttons were provided to exit popups.
2. Overly Flexible Customization Needed Constraints
While customization of the bento box arrangement was allowed, the level of control, especially over box size, felt excessive. Adding some restrictions would help maintain usability.
Consistency and Standards
1. Visual and Interface Inconsistencies
While the tool generally followed industry standards, several UI elements lacked consistency. This included color usage, icon meaning (e.g., the "open external link" icon), and number formatting.
2. Standards Compliance with Room for Polish
The platform aligned well with similar OSINT tools in structure and features. However, small details like inconsistent visual cues and formatting standards needed refinement.
THE AUDIT
Breaking down the site architecture
SOLUTIONS
The impact/effort matrix I created to guide design decision-making and prioritize development efforts under tight constraints
SOLUTION – #1
Site headers redesigned to lower cognitive load & clean up UX hierarchy
Observing the drone pilots made it clear they didn't understand the intended flow or which search tool to use. While the software met technical requirements and was built for power users, the UX lacked clear visual hierarchy and guidance for non-technical users. I also used Attention Insight to see where user focus went when users first landed on the page. Unsurprisingly, attention was scattered across the page, and the layout scored low on clarity.
1. Consolidate the search system into one element where users could input their own filters while still benefiting from the analyst-curated keywords in the backend (a sketch of this model follows below). This maintained analyst control while making the experience seamless for everyday users.
2. Move the analyst keyword search element to a less prominent area, such as a Settings sub-page. This would let general users focus on their immediate search needs without confusion or changes to the keywords shaping their data feed.
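To make the model concrete, here is a minimal TypeScript sketch of how user input could merge with curated keywords server-side. This is illustrative only, not HaloCortex's actual code; the types and function names are my own.

```typescript
// Hypothetical shape of a unified search request: the general user's
// ad-hoc input is merged with analyst-curated keywords on the backend.
interface SearchRequest {
  query: string;         // free-text input from the general user
  userFilters: string[]; // filters the user adds in the UI
}

interface CuratedFeed {
  analystKeywords: string[]; // managed from the Settings sub-page
}

// The user never sees or edits the curated keywords, but every
// search still benefits from them.
function buildQueryTerms(req: SearchRequest, feed: CuratedFeed): string[] {
  const terms = [req.query, ...req.userFilters, ...feed.analystKeywords];
  // Deduplicate and drop empty entries before querying the data feed.
  return [...new Set(terms.map((t) => t.trim()))].filter((t) => t.length > 0);
}
```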
SOLUTION – #2
Reducing cognitive load & unnecessary friction within the Time Window filter
The “Time Window” feature was designed to help users refine searches by specific dates and time ranges. However, the interface overwhelmed users with too many options, presented in a non-intuitive format. Military time overlapped visually, selected values weren’t clearly distinguished, and the UI didn’t offer dynamic feedback, leaving users unsure whether their selections were successfully registered. The overall experience created cognitive overload and introduced unnecessary friction during what should have been a straightforward filtering task.
When testing the window with analysts, I observed confusion: hesitation, frequent filter resets, and so on. Because there was no immediate visual response to inputs, the interaction felt unreliable. The issue was two-pronged: unclear interaction feedback and clutter.
To alleviate cognitive overload and create more consistent behavior across the platform, I focused on simplifying the Time Window component by reducing redundancy, addressing unresponsive interactions, and improving clarity. I consolidated the military and AM/PM formats into a single military time option, aligning with our primary user base of military and military-adjacent personnel. I also reduced the number of presented options and repositioned them to the left side of the panel to make them more immediately visible during the scanning process. Finally, I introduced dynamic feedback to reassure users that their selections had been successfully applied.
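As an illustration of that interaction model, here is a sketch assuming a React/TypeScript front end; the component, preset values, and class names are hypothetical, not the shipped implementation.

```tsx
import { useState } from "react";

// Illustrative preset windows, positioned on the left for quick scanning.
const PRESETS = ["Last 1h", "Last 6h", "Last 24h", "Last 7d"] as const;
type Preset = (typeof PRESETS)[number];

export function TimeWindowFilter({ onApply }: { onApply: (p: Preset) => void }) {
  const [selected, setSelected] = useState<Preset | null>(null);

  return (
    <div role="group" aria-label="Time window">
      {PRESETS.map((preset) => (
        <button
          key={preset}
          // aria-pressed plus a visual class give immediate, accessible
          // feedback that the selection registered.
          aria-pressed={selected === preset}
          className={selected === preset ? "preset selected" : "preset"}
          onClick={() => {
            setSelected(preset); // state change re-renders instantly
            onApply(preset);     // apply the filter right away
          }}
        >
          {preset}
        </button>
      ))}
      {/* One 24-hour (military) entry path; the duplicate AM/PM picker is gone. */}
      <input
        type="text"
        inputMode="numeric"
        pattern="([01]\d|2[0-3]):[0-5]\d"
        placeholder="HH:MM"
        aria-label="Start time, 24-hour"
      />
      <input
        type="text"
        inputMode="numeric"
        pattern="([01]\d|2[0-3]):[0-5]\d"
        placeholder="HH:MM"
        aria-label="End time, 24-hour"
      />
    </div>
  );
}
```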
SOLUTION – #3
Fixing the network graph for increased usability
Throughout this project, I managed and mentored a design intern who executed the visual design, while I provided strategic direction, defined the UX architecture, and led the review cycles. As we curated high-impact, medium-lift solutions, I identified key usability issues with the network graph and guided the intern to focus on this specific area.
To help narrow the scope of their solution, I directed their design plan to prioritize usability enhancements. Based on that direction, the intern explored features such as hover interactions, direct navigation routes to user profiles, and contextual information embedded within the graph. I also identified the need to streamline and clarify the filter system. I continuously reviewed their wireframes throughout the development process, provided actionable feedback, and supported the creation of final prototypes in Figma.
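One of the explored patterns, sketched here as illustrative TypeScript rather than the intern's actual Figma work, pairs a hover preview with a direct route to the user profile:

```tsx
import { useState } from "react";

// Hypothetical node shape; the production graph model differed.
interface GraphNode {
  id: string;
  label: string;
  profileUrl: string;
}

// Hovering a node surfaces context inline; clicking navigates straight
// to the user profile instead of forcing a detour through the filters.
export function NodePreview({ node }: { node: GraphNode }) {
  const [hovered, setHovered] = useState(false);
  return (
    <span
      onMouseEnter={() => setHovered(true)}
      onMouseLeave={() => setHovered(false)}
    >
      <a href={node.profileUrl}>{node.label}</a>
      {hovered && <span className="node-preview">ID: {node.id}</span>}
    </span>
  );
}
```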
IMPLEMENTATION DIFFERENCES – #1
Collaborating with developers on the time window resulted in a more aligned and cohesive design
At the end of the project, the budget allowed for only minimal design QA on my recommendations, with the time window element being one of the few components I was able to support through development. I helped guide the creation of the shortcuts, input fields, and interactions within each selection component, but the funding ran out shortly after.
IMPLEMENTATION DIFFERENCES – #2
Cognitive overload still persistent, now with some added confusion
Although the team initially agreed on the need to simplify the UI headers, after the project's completion they ultimately chose to retain most of the existing design, incorporating only a few of the small adjustments I recommended, most notably separating the analyst search component from the common user search. At my suggestion, they also prioritized the LLM feature by moving it to a more prominent position in the header rather than leaving it as the last element.
However, the team and developers did not take the additional step of clarifying the feature's purpose. As a result, users land on the page without a clear understanding of what the feature does. I would have recommended conducting usability tests to determine the most effective approach for users, as well as further clarifying the LLM feature by renaming it and introducing a supporting informational banner to enhance the landing experience.
IMPLEMENTATION DIFFERENCES – #3
The hover preview for node information was scrapped
Although the side menu was used in the final design, the team did not have the bandwidth to implement the hover capability on the individual nodes.
THE HANDOFF
To support the development process, I adapted the UX recommendations into GitHub-friendly tickets
When presenting my recommendations to the HaloCortex team, the response was overwhelmingly positive, affirming the need for these changes. I prepared handoff documentation to support the redesigns, ensuring a smooth transition for the development team.
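Each recommendation became a small, self-contained ticket. The example below is representative of the structure, not copied from the actual backlog:

```markdown
## [UX] Time Window: consolidate to a single 24-hour format

**Problem**
The Time Window filter overlaps military and AM/PM formats, and
selections give no visual confirmation of success.

**Recommendation**
- Remove the AM/PM entry path; default to 24-hour (military) time
- Move preset options to the left side of the panel
- Add a selected state and confirmation feedback on apply

**Priority**: High impact / medium effort (per the impact/effort matrix)
```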
To further support the project, I collaborated with the graphic designer to create a typeface guide, update icons, and provide general recommendations on color associations. I also engaged in QA/QC processes to validate their updated designs and ensure they aligned with company branding.
LESSONS LEARNED
This project reinforced the importance of strategic prioritization, cross-functional collaboration, and user-centered design in enterprise AI products
This was one of the first design projects I led with an intern under my wing, and I quickly learned that focusing on high-impact, low-effort design changes is sometimes the only way to deliver meaningful improvements within tight constraints. Ultimately, navigating cross-functional teams and securing executive buy-in was key to driving design changes at scale, especially as the only woman in the room at times.
AI can be perceived as a black box. Educating users about how AI-driven insights are generated fosters trust and credibility, even in the smallest tooltip or informational modal.
If I had stayed on the project, I would have tracked:
Task Completion Time: Measure how long it takes users to complete tasks before and after the redesign to assess whether the UI changes improved user efficiency.
User Interaction Rates: Track click-through rates (CTR) or engagement with the redesigned headers and filters to see if the changes made the interface more intuitive.
Filter Usage Frequency: Track how often users engage with the network graph filters to gauge their usefulness and adoption.
WCAG Compliance Score: Measure the redesign's adherence to the Web Content Accessibility Guidelines, particularly color contrast and readability, to ensure it improved accessibility (a sketch of the contrast calculation appears below).
User Feedback/Surveys: Collect direct feedback from users on the redesigned elements, focusing on ease of use and satisfaction with the changes.
Tracking these metrics would have provided valuable insights into how effective the redesigned UI was in meeting user needs. It would have also allowed me to make data-driven recommendations for further refinements based on real user behavior.
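For reference, the WCAG contrast score mentioned above is fully mechanical to compute. A minimal TypeScript sketch follows; the color values are placeholders, not the platform's actual palette.

```typescript
// WCAG 2.x relative luminance of an sRGB color (channels 0-255).
function luminance(r: number, g: number, b: number): number {
  const linearize = (c: number) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);
}

type RGB = [number, number, number];

// Contrast ratio between two colors; WCAG AA requires at least 4.5:1
// for body text and 3:1 for large text.
function contrastRatio(fg: RGB, bg: RGB): number {
  const [lighter, darker] = [luminance(...fg), luminance(...bg)].sort((a, b) => b - a);
  return (lighter + 0.05) / (darker + 0.05);
}

// e.g., light gray text on a navy background (placeholder values):
console.log(contrastRatio([176, 190, 210], [10, 25, 49]).toFixed(2));
```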