S3E2 - DSA tests are ineffective. There's a better way to do Tech Hiring!

Season 3, Episode 2 - Listen Here

Show Notes:

Nearly anyone you talk to in technology will agree that tech hiring is broken. There are many reasons for this; probably the most controversial is the use of DSA-style problems to test a candidate's technical ability.

Today we'll talk about what DSA tests do, and don't, cover. We'll also discuss a better approach that's a new twist on an existing method...


Show Script:

Nearly anyone you talk to in technology will agree that tech hiring is broken. There are many reasons for this, some more important or controversial than others. Probably the most controversial is the usage of DSA-style problems.

DSA stands for Data Structures and Algorithms, and these types of tests were popularised by platforms like LeetCode and HackerRank. The idea is that these are the core fundamentals of Computer Science, and that if a candidate can work out how to solve these problems in the most optimal manner, then they're a guaranteed good hire for a Software Developer or Software Engineer job.

Tech businesses love this, as they can effectively outsource the responsibility of testing candidates' technical skills. All of the testing and scoring happens on these online platforms like LeetCode, CoderByte, etc., and the only thing they need to do is select a difficulty level, send an invitation link, and later review the results. Results come in an X-correct-out-of-Y-questions format, and often show the Big-O time complexity of the solution as well. Neat!
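To make this concrete, here's the flavour of question these platforms serve up: a sketch of the classic "two sum" problem, written in TypeScript (a representative example, not taken from any particular platform).

```typescript
// Classic DSA-style challenge: given an array of numbers and a target,
// return the indices of two entries that sum to the target.
function twoSum(nums: number[], target: number): [number, number] | null {
  // Map from value -> index where we first saw it.
  const seen = new Map<number, number>();
  for (let i = 0; i < nums.length; i++) {
    const complement = target - nums[i];
    const j = seen.get(complement);
    if (j !== undefined) {
      return [j, i]; // found a pair
    }
    seen.set(nums[i], i);
  }
  return null; // no pair exists
}

console.log(twoSum([2, 7, 11, 15], 9)); // [0, 1]
```

The scoring mostly comes down to whether the candidate knows the hash-map trick that takes this from O(n²) to O(n); exactly the kind of thing that rewards memorization, as we'll see below.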

From an organizational perspective, this is efficient and low cost. It frees up their own devs from the distraction of hiring, and effectively offloads the work and responsibility to another entity. DSA solutions are fairly standardised across languages (a rare feat, since tech rarely agrees on anything!), so the universality of the testing is comforting for hiring managers of all levels.

The problem with using these types of challenges to assess candidates is that they hyper-optimise for a very small part of the Software Development life cycle, and definitely not the most important parts!

Things we know it tests for:

  • Data structures

    • Arrays

    • Graphs

    • Heaps

    • Linked Lists

    • Maps

    • Queues

    • Stacks

    • Trees

  • Algorithms

    • Sorting

    • Filtering

    • Traversal

  • Concepts

    • Graph Theory

      • Trees – weighted, unweighted, directed, rooted, unrooted, etc.

    • Math

      • Linear Algebra

      • Computational Geometry

      • Combinatorics

      • Number Theory

    • String Processing

    • Bit Manipulation

That sounds great, right? All pertinent parts of the craft. Unfortunately, in reality it predominantly tests for memorization. We like to think these challenges test for people who can solve problems, but the stark reality is that it's basically a learn-by-heart memory game. There's a lot of nuance in real-world Software Engineering that never gets seen or assessed.

Some examples of what LeetCode-style problems don't test for:

  • Use of standard dev tools

    • The majority of these tests run on centralised platforms, and candidates are required to use the embedded tooling.

    • Which means candidates can't use their own IDE or demonstrate knowledge of:

  • Correct selection/usage of Design Patterns

    • Pretty much all of GoF (Gang of Four), etc...

    • Generics

    • Inheritance

  • Clear application of, and conformance to, application/architectural patterns

    • MVVM

    • MVC

    • Hexagonal

    • N-Tier

  • Software Reuse

    • Why are we constantly reinventing the wheel?

      • They often want candidates to do everything from scratch, which bypasses usage of the target framework or the language's standard library for things like sorting.

  • Data validation

    • Nearly all LeetCode-style problems give you valid data, or ask the candidate to make some fairly optimistic assumptions about inputs

    • No real-world use cases of validation

      • Filter out bad values?

      • Exit out if even one is bad?

      • What determines bad?

        • Invalid value, not within a certain range, etc...?

      • There's no argument validation or null-checking at all

  • Error Handling

    • I.e., what happens when things go wrong?

    • You don't see many try/catch blocks in algorithmic challenges! (There's a short sketch of validation and error handling after this list.)

  • Code Style, Commenting, Documentation

    • In real code, you can't name all of your variables and functions a, b, c, x, y, z, foo or bar!

  • Problem Domain Knowledge

    • If you're looking for front-end developers, these don’t test knowledge of Components, Web standards, MVVM, etc...

    • If you're looking for Backend API developers, these don't test their knowledge of REST, GraphQL, gRPC, SOAP or similar

    • They don't test for industry-standard practices like model mapping, Dependency Injection, or usage of interfaces and generics

  • Product Domain Knowledge

    • If you store any user data at all, having someone who knows about GDPR, Consent, etc... is invaluable.

    • If you're a finance company, having someone who knows about the right way to log and report on user actions for auditing purposes can save you millions in fines.

    • Compliance is key!

  • Ability to work on existing codebases

    • Not every piece of code needs to be new. When a candidate joins a company, it's the norm to share a codebase (which could be MANY years old!), conform to coding style guides and generally work with and in an existing team.
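
To illustrate the validation and error-handling gaps above, here's a minimal sketch of the defensive code that real work demands but DSA challenges never exercise. The domain (a price-averaging helper) and every name in it are hypothetical, purely for illustration.

```typescript
// A hypothetical helper showing argument validation, a definition of
// "bad" input, and error handling. None of this appears in DSA tests.
function averagePrice(prices: unknown): number {
  // Argument validation: DSA problems hand you clean input;
  // production code can't assume that.
  if (!Array.isArray(prices) || prices.length === 0) {
    throw new TypeError("prices must be a non-empty array");
  }
  // Decide what "bad" means here: non-numeric, non-finite,
  // or negative values are rejected outright.
  const valid = prices.filter(
    (p): p is number => typeof p === "number" && Number.isFinite(p) && p >= 0
  );
  if (valid.length !== prices.length) {
    throw new RangeError("prices contains invalid entries");
  }
  return valid.reduce((sum, p) => sum + p, 0) / valid.length;
}

// Error handling: what happens when it goes wrong?
try {
  console.log(averagePrice([19.99, 4.5, "oops"]));
} catch (err) {
  // In a real service this would be logged and reported, not just printed.
  console.error("Could not compute average:", err);
}
```

Whether you filter bad values out or fail fast, as done here, is itself a design decision worth discussing with a candidate.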

Now, people will jump to the defence of these platforms and say "well, you actually can do that if you create custom questions or projects, etc." And that's all well and good, but then you're writing your own questions and having to review and correct them yourself. You're then just paying for a platform to deliver those, instead of it doing the testing for you. It also doesn't fix the issue of not being able to use standard dev tools or demonstrate product or problem domain knowledge.

A lot of people disagree with this stance, and y'know what – good for them. That's fine! Let's agree to disagree and say "different horses for different courses"!

To my mind though, there is something wrong if people are coming out of four-year degrees, or an additional Master's, and then have to study DSA for one to six months before they can successfully interview…


Now that we've shed some light on why DSA tests aren't great, let's talk about a better option. When we talk about a better, or more viable, solution, let's look at the characteristics it would have:

First is that it tests for the things we want a candidate to be able to do.

  • If it's a Vue/Svelte/React job, then it tests for building components and applications, and populating/editing application state

  • If it's a backend job, it should test for building an API, reading from and persisting to a database, data validation, etc.

  • If it's full stack, maybe a webpage that connects to a backend and displays results in a novel way, like charting, etc.

I won't list every possibility, but think about testing what they're actually going to do as part of their day-to-day responsibilities. In every case, you want to think about code readability, maintainability, testing, separation of concerns, adherence to an architecture, and intent.

Next, we want them to be able to use the standard tooling for the job at hand. Instead of a crummy web editor, wouldn't it be nice if they could use VS Code or JetBrains Rider, and also show us how good they are with git?

Finally, we want them to be comfortable and produce their best work.

---

So with these characteristics in mind, it's natural to think of either a live-coding test or a take-home test. Rather than choose one, I'd actually like to propose a hybrid of both!

First, when we think of software work, it's not normal to have to think on the spot and work under time pressure while someone inspects and questions your every move. This rules out pure live-coding tests. Secondly, take-home tests can have issues with candidates being a little too "inspired" by other solutions. So, what I propose is that instead of giving them a prompt to build everything from scratch, we give them a base project and ask them to implement a feature within it. Here's the kicker though: give them the base project and feature request, then give them a few days to research solutions. A week later, have them live-code the solution in a shared session.

WILD, RIGHT?! It makes a lot of sense though, when you think about it.

  • It mimics real-world development so much more closely.

    • They'll be working with an established codebase, and need their solution to work within it. That means they can't compromise existing functionality: existing unit tests still need to pass, and they'll also need to write some of their own.

  • They'll have time to research and iterate upon a solution. No fire-and-forget dirty code!

  • They implement the final solution live, in order to demonstrate their thought process and understanding of what-goes-where.

    • No fear of copy-paste solutions

  • It removes performance pressure, and gives you a more accurate insight into their abilities.

Now, you're probably asking how this tests candidates. If they can research the solution beforehand, doesn't this defeat the point? The answer is a resounding "NO"! Knowing what to research is the key here, and will absolutely separate candidates.

For example, if you ask a candidate to turn an API based on a flat file into one based on a data store, there's a lot to consider. It's not just a case of wrapping a SQL library and calling it a day!

  • What design pattern will they use? The Repository pattern, right? Will they sprinkle the Unit of Work pattern on top? (There's a sketch of this after the list.)

  • How should the data be modelled in the database, based on the domain models?

    • Are there normalizations or optimizations that can be achieved?

  • Is an ORM involved? How does that affect the implementation?

  • Any third-party packages that can do all, or part, of the job? Do they need to be wrapped or extended?

  • Validation, parameterization, security, concurrency, join optimization, etc. are all items to consider too.
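
To sketch just the first of those decisions, here's roughly what a Repository abstraction over the new data store could look like. The Product shape, method names, and in-memory stand-in are all illustrative assumptions, not a prescribed design.

```typescript
// A hypothetical Repository: the rest of the API codes against this
// interface rather than against any particular storage technology.
interface Product {
  id: string;
  name: string;
  price: number;
}

interface ProductRepository {
  getById(id: string): Promise<Product | null>;
  list(): Promise<Product[]>;
  save(product: Product): Promise<void>;
}

// Implementations are swappable: the old flat file and the new database
// can live side by side behind the same interface during the migration.
class InMemoryProductRepository implements ProductRepository {
  private products = new Map<string, Product>();

  async getById(id: string): Promise<Product | null> {
    return this.products.get(id) ?? null;
  }

  async list(): Promise<Product[]> {
    return [...this.products.values()];
  }

  async save(product: Product): Promise<void> {
    this.products.set(product.id, product);
  }
}

// Usage: the API layer only ever sees the interface.
(async () => {
  const repo: ProductRepository = new InMemoryProductRepository();
  await repo.save({ id: "p1", name: "Widget", price: 9.99 });
  console.log(await repo.getById("p1")); // { id: "p1", name: "Widget", price: 9.99 }
})();
```

A candidate who arrives with something like this, and can explain why the interface boundary sits where it does, tells you far more than a memorised graph traversal ever could.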

This gives them a better opportunity to showcase their knowledge, because they can bring in all of their experience. It also gives the interviewer a much deeper insight into the candidate's knowledge and the value that they can bring. Win-win for everyone!

You'll actually be testing for real things in software development, and depending on their progress you can lead into great follow-on questions: how they'd improve it, how they'd deploy it, what production issues they'd anticipate, etc. Now you're testing for everything that LeetCode leaves out, and you're opening the door to incredible candidates whom you would otherwise have missed out on due to a bad day.

Tech-hiring may be broken, but creative solutions like this that blend current options and enable candidates to bring their best can make it that much better!
