Algorithms and Data Structures Prep Guide for Technical Interviews and AIEH Assessments
Algorithms and data structures (DSA) are the most-tested topic in technical interviews at established tech employers and one of the more contested topics in modern hiring practice: some employers test them heavily despite weak job-relevance for many roles, while others have de-emphasized them in favor of role-realistic work samples. Either way, candidates targeting roles at companies that test DSA need preparation. This guide covers the canonical material at the depth those interviews expect and grounds the cognitive-reasoning skills the AIEH Cognitive Reasoning assessment probes.
This guide is organized around the patterns that recur most often in technical interviews and production work, not around exhaustive coverage of every algorithm. It’s calibrated for first-time learners building toward interview-ready competence and experienced practitioners refreshing specific dimensions before a high-stakes assessment.
Data Notice: Algorithm complexity bounds described here reflect the canonical analysis from standard CS textbooks (Cormen et al.'s CLRS; Sedgewick & Wayne); production performance can deviate substantially from theoretical bounds due to cache behavior, constant factors, and input-distribution assumptions. Use theoretical bounds as a starting point, not a final answer.
Who this guide is for
Three reader profiles benefit from this guide:
- Candidates preparing for coding interviews at established tech employers (FAANG, established startups, financial-tech, some traditional enterprises). DSA-style coding interviews remain the dominant technical-screen format despite mixed evidence on job-relevance.
- Candidates preparing for the AIEH Cognitive Reasoning assessment. The full Cognitive Reasoning assessment probes abstract reasoning, pattern recognition, and structured problem-solving — skills that overlap substantially with the cognitive demands of DSA problems even though the assessment format is different. The free 5-question Cognitive Reasoning sample is takeable today.
- Working engineers refreshing DSA fluency. Many engineers haven’t actively used canonical DSA topics since the last interview cycle; refreshing before a job change is a common need.
The complexity-analysis framework
Algorithm complexity is described using Big O notation, expressing how runtime or memory grows with input size. The canonical complexity classes from fastest to slowest:
- O(1) — constant time. Operations that don’t depend on input size. Hash table lookup (average case), array indexing, stack push/pop.
- O(log n) — logarithmic. Each step divides the problem in half. Binary search, balanced binary tree operations.
- O(n) — linear. Each input element processed a constant number of times. Linear search, single-pass aggregation.
- O(n log n) — log-linear. Most efficient comparison-based sorts (merge sort, heapsort, randomized quicksort).
- O(n²) — quadratic. Nested iteration over input. Bubble sort, insertion sort, naive duplicate detection.
- O(2ⁿ), O(n!) — exponential and factorial. Brute-force recursion without memoization, permutation generation. Fast to write; slow to run on anything but small inputs.
Practitioners need to recognize complexity at a glance and explain the analysis. “This is O(n log n) because we sort once and then do O(n) work per element” should be reflexive.
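As a minimal sketch of that reflexive analysis (the function name is illustrative, not from the source), consider finding the smallest gap between any two values in a list — sorting dominates the runtime:

```python
def closest_pair_gap(values):
    # Sorting is O(n log n); the adjacent-pair scan afterward is O(n),
    # so the overall complexity is O(n log n).
    s = sorted(values)
    return min(s[i + 1] - s[i] for i in range(len(s) - 1))

closest_pair_gap([8, 1, 4, 3])  # smallest gap is between 3 and 4
```

Being able to decompose the bound this way — identify the dominant term, discard the rest — is the skill interviewers listen for.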
Core data structures
Six data structures cover essentially all interview problems:
- Arrays. Contiguous memory, O(1) indexing, O(n) insertion in the middle. The default data structure when problems involve sequences with random access.
- Hash tables (dictionaries, maps). Average O(1) insertion and lookup. The go-to structure for “have I seen this before?” patterns and frequency counting. Worst case is O(n) for adversarial inputs but rarely matters in practice.
- Linked lists. Nodes pointing to other nodes. O(1) insertion at known positions, O(n) traversal. Less common in production than interview problems suggest, but the pointer-manipulation patterns appear in tree and graph algorithms.
- Stacks and queues. LIFO and FIFO orderings. Stacks are central to recursion, parsing, and backtracking; queues are central to BFS traversal and producer-consumer patterns.
- Trees. Hierarchical structures, particularly binary search trees (sorted, supporting O(log n) operations when balanced) and heaps (priority queue implementations, O(log n) insert and extract-min/max).
- Graphs. Vertices and edges; the most general structure. Adjacency-list and adjacency-matrix representations have different time-and-space trade-offs.
A subset of common interview problems involves recognizing which data structure best fits a problem. “Find duplicates in linear time using O(n) space” is a hash-table signal; “find the kth largest element” is a heap signal; “process hierarchical data” is a tree signal.
Sorting and searching
Sorting and searching are interview classics:
- Comparison-based sorts, which face an Ω(n log n) lower bound: merge sort (stable, O(n) extra space), quicksort (average O(n log n), worst O(n²) but rare with randomization), heapsort (in-place, O(n log n) worst case).
- Linear-time sorts for restricted inputs: counting sort (integers in bounded range), radix sort (multi-digit numbers), bucket sort (uniformly-distributed values).
- Binary search in a sorted array for O(log n) lookup. The pattern generalizes to “search for the boundary in a monotonic function” problems, which are common interview questions in disguised form.
Recognize that production code rarely implements sorts from scratch — language libraries provide good defaults. The value of knowing the algorithms is recognizing complexity bounds and choosing data structures that interact well with the library sort.
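The "boundary in a monotonic function" generalization of binary search can be sketched as follows (the function name and signature are illustrative, not from the source): the predicate is False for every index below some boundary and True at and above it, and the loop narrows in on that boundary.

```python
def first_true(lo, hi, pred):
    # Returns the first index in [lo, hi) where pred holds, or hi if none.
    # Requires pred to be monotonic: False...False True...True.
    while lo < hi:
        mid = (lo + hi) // 2
        if pred(mid):
            hi = mid        # boundary is at mid or earlier
        else:
            lo = mid + 1    # boundary is strictly after mid
    return lo

# e.g. smallest integer whose square is at least 17:
first_true(0, 10, lambda x: x * x >= 17)
```

The same loop finds the leftmost occurrence of a target in a sorted array with `pred = lambda i: a[i] >= target`, which is why many "disguised" interview questions reduce to it.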
Recursion and dynamic programming
Recursion expresses problems in terms of smaller versions of themselves. The canonical pattern:
def factorial(n):
    if n <= 1:
        return 1
    return n * factorial(n - 1)
Most recursive problems suffer from exponential redundant work when sub-problems repeat. Memoization caches results to avoid recomputation:
def fib(n, memo={}):  # the shared default dict caches results across calls
    if n in memo:
        return memo[n]
    if n < 2:
        return n
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]
Dynamic programming (DP) is the bottom-up variant of memoized recursion: build the solution table iteratively from base cases up. DP problems are the second-hardest interview category (after graph problems); recognizing the DP signal — overlapping sub-problems and optimal substructure — is the core skill.
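The bottom-up equivalent of the memoized fib above can be sketched like this (the function name is illustrative): the same recurrence, computed iteratively from the base cases, with only the last two values retained.

```python
def fib_bottom_up(n):
    # Same recurrence as the memoized version, built from base cases up.
    # O(n) time, O(1) space since only two prior values are needed.
    if n < 2:
        return n
    prev, cur = 0, 1
    for _ in range(2, n + 1):
        prev, cur = cur, prev + cur
    return cur
```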
Common DP problem patterns:
- Longest common subsequence / longest increasing subsequence. String and sequence comparison.
- Knapsack. Selection under capacity constraints.
- Edit distance. String similarity.
- Coin change. Counting ways to make change.
- Matrix path problems. Counting paths or finding minimum cost paths through a grid.
The pattern: identify the subproblem (what does dp[i][j] mean?), identify the recurrence (how does dp[i][j] build from smaller subproblems?), and identify the base case.
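Applying that three-step pattern to coin change, as a hedged sketch (function name illustrative): the subproblem is dp[a] = number of ways to make amount a, the recurrence extends each smaller amount by each coin, and the base case is dp[0] = 1.

```python
def coin_change_ways(coins, amount):
    # dp[a] = number of ways to make amount a from the given coins.
    dp = [0] * (amount + 1)
    dp[0] = 1  # base case: one way to make zero (use no coins)
    for c in coins:  # outer loop over coins counts combinations, not permutations
        for a in range(c, amount + 1):
            dp[a] += dp[a - c]  # recurrence: extend amount a-c with coin c
    return dp[amount]
```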
Graph algorithms
Graph problems are the most common “advanced” interview topic and the most consequential for production work in many domains:
- Breadth-first search (BFS). Level-by-level exploration using a queue; finds shortest paths in unweighted graphs.
- Depth-first search (DFS). Recursion or stack; useful for cycle detection, topological sort, connected components.
- Topological sort. Order vertices in a DAG such that all edges go from earlier to later. Used for build-system scheduling, course prerequisites, dependency resolution.
- Dijkstra’s algorithm. Shortest path in weighted graphs with non-negative weights. O((V + E) log V) with a binary heap.
- A* search. Dijkstra with a heuristic function; particularly important in pathfinding for games and navigation.
- Minimum spanning tree (Kruskal’s, Prim’s). Connect all vertices with minimum total edge weight.
- Union-find (disjoint set). Track connected components efficiently; near-O(1) amortized per operation with path compression.
Practitioners should recognize the graph signal in problem statements: “find the shortest path”, “is this network connected”, “schedule with dependencies”, “group these elements by transitive relationships.” The signal isn’t always explicit; some problems are graph problems in disguise.
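A minimal BFS sketch for the "shortest path in an unweighted graph" signal (the function name and adjacency-dict format are illustrative assumptions):

```python
from collections import deque

def shortest_path_len(adj, start, goal):
    # BFS over an adjacency-list graph. Returns the number of edges on the
    # shortest path from start to goal, or -1 if goal is unreachable.
    dist = {start: 0}
    q = deque([start])
    while q:
        u = q.popleft()
        if u == goal:
            return dist[u]
        for v in adj.get(u, []):
            if v not in dist:          # first visit is the shortest, by BFS
                dist[v] = dist[u] + 1
                q.append(v)
    return -1
```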
Two pointers, sliding window, and other classic techniques
Several interview-classic techniques recur often enough to be worth pattern-recognizing:
- Two pointers. Walk two indices through a sequence, often from opposite ends. Useful for sorted-array problems (sum-to-target, palindromes) and string manipulation.
- Sliding window. Maintain a window of consecutive elements, expanding and contracting based on conditions. Useful for substring problems (longest substring without repeating, smallest subarray with sum at least K).
- Prefix sums. Precompute cumulative sums for O(1) range queries. Useful for “sum of subarray” problems.
- Monotonic stack. Maintain a stack with monotonic property; useful for “next greater element” problems and histogram-style problems.
- Backtracking. Build a solution incrementally and undo decisions that lead to dead ends. Useful for combinatorial problems (N-queens, Sudoku, permutation/combination generation).
Each technique solves a class of problems efficiently; the skill is recognizing which class a given problem belongs to.
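A hedged sketch of the sliding-window technique on the classic "longest substring without repeating characters" problem (function name illustrative):

```python
def longest_unique_substring(s):
    # Expand the window right one character at a time; on a repeat, jump the
    # left edge past the previous occurrence. Each index advances at most
    # once, so the whole scan is O(n).
    last = {}           # char -> index of its most recent occurrence
    left = best = 0
    for right, ch in enumerate(s):
        if ch in last and last[ch] >= left:
            left = last[ch] + 1
        last[ch] = right
        best = max(best, right - left + 1)
    return best
```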
Bit manipulation and other often-tested techniques
Several techniques appear less frequently than the core categories above but recur often enough to be worth knowing:
- Bit manipulation. XOR for finding the unique element in a sequence where every other element appears twice; bitwise AND for testing specific bits; left/right shifts for power-of-2 arithmetic. Bit manipulation problems are over-represented at some employers; recognize the canonical patterns even if you’d reach for a different tool in production.
- Trie data structure. Tree-of-prefixes structure useful for word-lookup problems (autocomplete, spell-check, longest-common-prefix). Tries appear in interview problems more than production work, but they’re a recurring enough pattern to recognize.
- Disjoint set / union-find. Mentioned briefly in the graph algorithms section; deserves separate recognition for problems involving “are these elements in the same group?” with dynamic merging. Near-O(1) amortized with path compression and union-by-rank.
- Reservoir sampling. Sample k elements uniformly from a stream of unknown length using O(k) memory. Niche but recurring in technical interviews and real-world streaming contexts.
- Floyd’s cycle detection (tortoise and hare). Detect cycles in linked lists or function-iteration sequences using two pointers moving at different speeds. The canonical application is detecting infinite loops in linked-list manipulation.
These techniques don’t have the breadth of applicability of the core categories, but solving an interview problem cleanly with the right specialized technique signals depth that reaching for general-purpose tools doesn’t.
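The XOR pattern mentioned above can be sketched in a few lines (function name illustrative): because x ^ x == 0 and XOR is commutative, every paired element cancels, leaving the unique one.

```python
from functools import reduce
from operator import xor

def single_number(xs):
    # Every element appearing twice cancels itself; O(n) time, O(1) space.
    return reduce(xor, xs)
```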
Common interview problem patterns
Six recurring problem patterns worth recognizing reflexively:
- “Find duplicates / first repeating / k most frequent.” Hash table for counting, heap for top-k.
- “Top-K elements.” Min-heap of size K (keep heap small).
- “Subarray sum equals K / longest subarray with property X.” Hashmap of prefix sums or sliding window depending on constraints.
- “Tree path / tree balance / tree diameter.” Recursion on the tree, returning information from children to combine at parent.
- “Schedule courses / build order / detect cycle.” Topological sort or graph DFS with cycle detection.
- “Find minimum cost path.” DP for grid problems, Dijkstra for general graphs, BFS for unweighted graphs.
Recognizing the pattern from the problem statement is the hardest skill — once you know it’s a top-K problem, the implementation follows.
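For instance, once a problem is recognized as top-K, the size-K min-heap implementation is nearly mechanical (function name illustrative):

```python
import heapq

def top_k(xs, k):
    # Keep a min-heap of size k: the root is the smallest of the current
    # top-k, so each of the n elements costs O(log k) — O(n log k) total.
    heap = []
    for x in xs:
        if len(heap) < k:
            heapq.heappush(heap, x)
        elif x > heap[0]:
            heapq.heapreplace(heap, x)   # pop the smallest, push x
    return sorted(heap, reverse=True)
```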
When to use AI assistance well in DSA work
AI assistance is increasingly common in technical interviews that allow it (live coding sessions, take-home assessments that don’t restrict tooling). The discipline:
- AI is excellent at boilerplate. Standard implementations of well-known algorithms are AI-strong; the practitioner reviews for edge cases.
- AI is moderate at recognizing patterns. AI can suggest “this is a sliding-window problem” but the practitioner should verify before committing.
- AI is weak at non-standard variations. Interview problems often have twists that AI’s training-data pattern matching misses; the practitioner authors carefully and tests against edge cases.
The AIEH Cognitive Reasoning assessment specifically probes the underlying reasoning skills rather than algorithm-specific knowledge — strong reasoners tend to recognize patterns in both DSA problems and the cognitive-reasoning items the assessment uses, even though the surface format differs.
How this maps to AIEH assessments and roles
This guide grounds the cognitive-reasoning skills probed by AIEH’s Cognitive Reasoning assessment family (see the Cognitive Reasoning sample and the scoring methodology for how scores map onto the 300–850 Skills Passport scale).
For role-specific applications:
- Backend Engineer — DSA fluency is tested most heavily for backend roles where systems-engineering depth is the central skill; this guide covers the concept floor.
- Full-Stack Engineer — DSA fluency is tested less heavily but still appears at most established employers’ technical screens.
- ML Engineer — DSA fluency is tested moderately; ML-specific algorithm knowledge (gradient descent, backpropagation, common ML algorithms) is tested more heavily than canonical DSA.
- Mobile Engineer — DSA fluency is tested moderately; platform-specific knowledge often matters more than canonical DSA.
- Security Engineer — Lower DSA-test density than other engineering specialties; cryptographic primitives and attack-pattern reasoning are tested more directly.
Resources for deeper study
Three resources that reward sustained study:
- Introduction to Algorithms (CLRS) by Cormen, Leiserson, Rivest, and Stein. The canonical algorithms textbook; exhaustive but heavy. Best as a reference rather than a cover-to-cover read.
- Algorithms by Sedgewick & Wayne. More approachable than CLRS while covering the same canonical material; particularly strong on intuition and visualization. Available with Coursera companion course.
- LeetCode and HackerRank. Platform-based practice for pattern recognition. LeetCode’s problems-grouped-by-topic view is particularly useful for systematic preparation.
For interview-specific pattern practice, the “Cracking the Coding Interview” book (McDowell) and “Elements of Programming Interviews” (Aziz, Lee, Prakash) cover the common interview problem patterns with worked solutions.
Common pitfalls candidates fall into
Three patterns candidates recurrently fall into during DSA technical interviews:
- Memorizing solutions instead of understanding patterns. Memorization fails on novel-but-related problems; understanding the pattern transfers. Practice pattern-recognition by solving problems blind and reviewing the solution approach rather than the specific code.
- Skipping complexity analysis. Interviewers expect candidates to articulate time-and-space complexity. Solving the problem without explicit complexity discussion signals weak engineering judgment.
- Diving into code without testing on examples first. Strong candidates trace through the algorithm on a small example before writing code, catching off-by-one errors and edge cases at design time rather than debug time.
Takeaway
DSA fluency at interview depth involves recognizing common problem patterns reflexively, knowing which data structure fits which problem class, articulating complexity analysis, and applying canonical techniques (sliding window, two pointers, DP, graph algorithms) where appropriate. AI assistance helps with boilerplate but doesn’t substitute for pattern recognition or complexity-analysis discipline.
The AIEH Cognitive Reasoning assessment probes the underlying reasoning skills rather than algorithm-specific knowledge — practitioners with strong reasoning skills tend to recognize patterns in both DSA and the assessment’s item formats. This guide covers the canonical material; sustained practice against pattern recognition compounds the value.
For broader treatment of AIEH’s assessment approach, see the Cognitive Reasoning sample, the scoring methodology, and the cognitive-ability in hiring overview.
Sources
- Aziz, A., Lee, T., & Prakash, A. (2018). Elements of Programming Interviews (2nd ed.). EPI / CreateSpace.
- Cormen, T. H., Leiserson, C. E., Rivest, R. L., & Stein, C. (2022). Introduction to Algorithms (4th ed.). MIT Press.
- Knuth, D. E. (1997). The Art of Computer Programming, Volumes 1–4A. Addison-Wesley.
- McDowell, G. L. (2015). Cracking the Coding Interview (6th ed.). CareerCup.
- Sedgewick, R., & Wayne, K. (2011). Algorithms (4th ed.). Addison-Wesley.
- Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124(2), 262–274.
About This Article
Researched and written by the AIEH editorial team using official sources. This article is for informational purposes only and does not constitute professional advice.