mike_hearn 21 hours ago

Stats-based query planning is really useful for ad-hoc queries where you don't care exactly how fast it runs; you just want it to be as fast as possible with as little thought put into the query as possible. Given that SQL was originally meant to be used by humans, its dependence on planners makes sense.

For queries issued by machines, where predictability is the most important thing, it probably makes sense to hard-code plans, or at least pin a plan so it can't change out from underneath you at midnight in production. I'm not sure about Postgres, but you can do this on Oracle DB. There's probably a better way to express schemas in which both indexes and plans can be constructed ahead of time, for the many cases where the developer already knows the likely distribution of the data and not having performance cliffs is more important than the convenience of the planner. Example:

    @Entity class SomeTable { @Mostly("SUCCESSFUL", "FAILED") SomeEnum row; }

and then the DB mapper - not necessarily an ORM - could provide the query hints needed to force the DB onto a reasonable plan for the annotated distribution.
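
For illustration, the hint such a mapper might emit could look like this (the hint syntax is real Oracle; the table, index, and bind names are invented for the sketch):

    -- Hypothetical mapper output: an optimizer hint forcing an index
    -- range scan, on the assumption that values outside the @Mostly
    -- set are rare and therefore selective.
    SELECT /*+ INDEX(t some_table_status_idx) */ *
    FROM some_table t
    WHERE t.status = :status;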

  • williamdclt 21 hours ago

    > I'm not sure about Postgres

    Not possible. It's explicitly a non-goal: sub-optimal plans are considered a bug to be fixed, and you can't even force the use of an index.

    To their credit, the Postgres query planner is amazing and generally works so well that you don't need to force indexes or plans. But that's little comfort when it doesn't, and you have a production incident on your hands.

    Edit: I think RDS Aurora Postgres does let you have managed query plans

    • barrkel 19 hours ago

      That's not really true. There are many session-scoped flags you can set that influence planner behavior, and sometimes toggling them for a specific query is what gets you through. More often, judicious use of CTEs as an optimization barrier (materialized CTEs in modern versions) is useful to force execution order.
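
      As a sketch (the Postgres syntax is real; table and column names are invented):

          -- Toggle a planner flag for one transaction only; SET LOCAL
          -- reverts automatically at COMMIT/ROLLBACK.
          BEGIN;
          SET LOCAL enable_nestloop = off;
          SELECT o.id, c.name
          FROM orders o JOIN customers c ON c.id = o.customer_id;
          COMMIT;

          -- Or pin execution order with a materialized CTE (an explicit
          -- optimization barrier since PostgreSQL 12).
          WITH recent AS MATERIALIZED (
              SELECT * FROM orders WHERE created_at > now() - interval '1 day'
          )
          SELECT r.id, c.name
          FROM recent r JOIN customers c ON c.id = r.customer_id;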

      I like that Postgres has a lot of tools at its disposal for executing queries. But you can't rely on it once data reaches even modest scale; it's rare that it does a good job with interesting queries.

      • williamdclt 18 hours ago

        I think we're saying the same thing: you can't force PG onto a plan or an index, you can only change your query in the hope that it takes the hint.

        Even session-scoped flags are a really coarse tool: they don't guarantee you'll get the plan you want, and they might unexpectedly impact other queries in the session.

        Materialised CTEs are one of the only tools that give real control, but an optimisation barrier is often the opposite of what you want.

    • atsjie 12 hours ago

      > the Postgres query planner is amazing

      Let's agree to disagree.

      It's becoming too complex, too unpredictable. Every query and variable becomes a surprise in production. It's too smart for its own good.

      • williamdclt 11 hours ago

        We don't entirely disagree, tbh. But having to work with MySQL now, I find I'm more surprised by how bad it is than by Postgres being too smart for its own good. I don't love everything about Postgres at all, but I always end up thinking it's the least bad option.

bob1029 21 hours ago

> Is there a plausible way to improve this?

Yes. There are techniques in high-end RDBMSes, like adaptive query processing, which allow the engine to update its approach during execution.

https://learn.microsoft.com/en-us/sql/relational-databases/p...

https://www.ibm.com/docs/en/i/7.6.0?topic=overview-adaptive-...

https://apex.oracle.com/pls/apex/features/r/dbfeatures/featu...
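
A hedged example of what that looks like in practice (real SQL Server syntax; it's enabled by default at recent compatibility levels):

    -- SQL Server: with batch-mode adaptive joins enabled, the engine
    -- defers the hash-join vs. nested-loops choice until it has seen
    -- the actual row count at runtime.
    ALTER DATABASE SCOPED CONFIGURATION
        SET BATCH_MODE_ADAPTIVE_JOINS = ON;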

  • to11mtm 15 hours ago

    After years of watching DBAs, 'Rerun Stats' is the equivalent of 'hitting the space station with a wrench' for an Oracle DB lol.

remus a day ago

An interesting article, but I disagree with the first sentence.

> The basic promise of a query optimizer is that it picks the “optimal” query plan.

This is kind of true, but the optimiser has to do all this under very tight constraints on available time and resources. A planner that returned a perfect plan every time would be useless if it took 500 ms and 500 MB to produce it, so I'd say a better phrasing would be:

> The basic promise of a query optimizer is that it picks a good plan most of the time and does it quickly and efficiently.

  • marcosdumay 20 hours ago

    > The basic promise of a query optimizer is that it picks a good plan most of the time and does it quickly and efficiently.

    Hum, no. The basic promise of a query optimizer is that it picks a good plan all of the time. Otherwise it's worse than useless, and you'd be better off with a DB where you can pin the plan for every query.

    But yes, the goal is on "good", not "optimal".

    • RaftPeople 18 hours ago

      > The basic promise of a query optimizer is that it picks a good plan all of the time.

      The poster you responded to is correct: it's a combinatorial problem that can't be expected to yield a good plan all of the time within normal time and resource constraints.

  • wat10000 21 hours ago

    I imagine that finding the optimal query plan would itself be at least NP-complete in the worst case, so you’ll definitely want to settle for “good” rather than optimal.

    • Sesse__ 3 hours ago

      Under fairly reasonable assumptions, finding the optimal join order for N tables in an arbitrary query graph has indeed been shown to be NP-hard. However, many common queries are not so difficult: e.g., if you just join A to B, B to C, C to D, etc. (a chain join) and allow joining only along join conditions, it's O(N³). But a star join (A to B, A to C, A to D, etc.) can become much worse, IIRC.

to11mtm 15 hours ago

Optimal query plans are often a bit of voodoo, dependent on the underlying DB.

That said, I've certainly seen anti-patterns in querying; my favorite being folks who bolt an ORM on, don't bother to define the mapping properly, and suddenly everything tanks because reasons [0]

But also, I've found over time that for DBs that get a lot of 'specific' traffic, it's frankly nice to abuse an ORM's behavior of treating certain things as literals instead of parameterizing them.

[0] - The main examples that come to mind are parameter mismatches: either sending unicode parameters for what's a non-unicode column in the DB, or the parameter length defaulting off the column size when the size is unspecified in the mapping, so it gets treated as VARCHAR(MAX). Both of these can break index usage in DBs.
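
A sketch of the unicode case (SQL Server-flavored; the table and column are invented): if order_code is VARCHAR but the ORM sends an NVARCHAR parameter, the column side gets implicitly converted and the seek is lost.

    -- Parameter arrives as nvarchar (the N prefix): SQL Server converts
    -- every row's order_code to nvarchar before comparing, which can
    -- turn an index seek into a scan (depending on collation).
    SELECT * FROM orders WHERE order_code = N'ABC123';

    -- Parameter typed to match the varchar column: the index seek works.
    SELECT * FROM orders WHERE order_code = 'ABC123';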

alexisread 5 days ago

TBH this is a good case for LINQ-style queries over SQL-based ones. LINQ style lets you narrow the query deterministically vs SQL, e.g. Mytable1.select().where().join(mytable2)

This applies the filter before the join, compared with:

Select from mytable1 join mytable2 on... Where...

Which relies on the query planner to resolve the correct application of the WHERE, and hopes it applies the WHERE before the join.

This becomes more important in the cloud, where Iceberg tables cannot be benchmarked in the same way as single-cluster relational DBs.

  • hobofan a day ago

    Both SQL and Linq-style queries end up in the same in-memory representation once they hit the query engine/query planner.

    Filter pushdown ("correct application of where, and hope it puts the where before the join") is table stakes for a query planner and has been for decades.
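
    For example (invented tables), these two spellings normalize to the same plan, with the filter applied below the join either way:

        -- Filter written after the join ...
        SELECT t1.id, t2.val
        FROM t1 JOIN t2 ON t2.t1_id = t1.id
        WHERE t1.status = 'ACTIVE';

        -- ... or "pre-narrowed", LINQ-style; the planner pushes the
        -- predicate down to the scan of t1 in both cases.
        SELECT f.id, t2.val
        FROM (SELECT * FROM t1 WHERE status = 'ACTIVE') AS f
        JOIN t2 ON t2.t1_id = f.id;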

    And no, Iceberg tables are not special in any way here. Iceberg tables contain data statistics, just like the ones described in the article to make the optimizer choose the right query plan.

    • Sesse__ 21 hours ago

      Filter pushdown is surprisingly subtle, when you start getting into issues of non-inner joins and multiple equalities. I had no idea until I had to implement it myself. (It's also not always optimal, although I don't offhand know of any optimizer that understands the relevant edge cases.)

  • brudgers 2 days ago

    When you are willing to pay for your RDBMS, you are likely to get an optimizer that does algebra for you (and does algebra pretty well).

    On the other hand, if you are hand coding the query, you are hand coding whether you use SQL, Linq, or anything else. And a strength of SQL is a robust ecosystem of documentation, tools, employment candidates, and consultants.

    • gigatexal a day ago

      exactly, one often gets what they pay for.

      Databases are complicated beasts, and tens of thousands of working years (if not more) from some of the smartest people have gone into them since the 70s.

      • hyperpape 21 hours ago

        > Databases are complicated beasts, and tens of thousands of working years (if not more) from some of the smartest people have gone into them since the 70s.

        Yes, but they've also set themselves a much harder problem. Instead of "find the proper execution for this particular query, armed with all the knowledge of the database you have" they have the problem of "find the proper execution for all possible queries, using only information that's encoded in DB statistics."

        So it's no surprise that it's quite easy to find places where the optimizer does silly things. To take one instance that bit me recently: OR clauses spanning different tables cannot be optimized well at all in Postgres (https://www.cybertec-postgresql.com/en/avoid-or-for-better-p...). The standard workaround is sketched below.
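
        Roughly (invented tables; this is the kind of rewrite the linked article suggests), you split the OR into a UNION so each branch can use its own index:

            -- Hard for Postgres: the OR spans two tables, so neither
            -- side's index helps much.
            SELECT a.id FROM a JOIN b ON b.a_id = a.id
            WHERE a.x = 1 OR b.y = 2;

            -- Rewritten: each branch is independently indexable. UNION
            -- (not UNION ALL) removes rows matched by both branches.
            SELECT a.id FROM a JOIN b ON b.a_id = a.id WHERE a.x = 1
            UNION
            SELECT a.id FROM a JOIN b ON b.a_id = a.id WHERE b.y = 2;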

        I'd still rather have optimizers than not: the optimizer does well most of the time, and my current work has far too many queries to hand-optimize. But you do end up seeing a lot of poorly optimized queries.

        • gigatexal 17 hours ago

          That's right. The better one is able to help the optimizer, the better life is. It's an art, not a science.

  • adamzochowski a day ago

    This depends on the join type.

        select ...
        from table1 
            left join mytable2 on ... 
        where ..
    
    If you move the contents of the WHERE clause into the join's ON clause, you will change the meaning of the query, as the sketch below shows.
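
    Concretely (column names invented): with a left join, a predicate in the ON clause keeps unmatched table1 rows, while the same predicate in WHERE filters them out:

        -- keeps every table1 row; mytable2 columns come back NULL
        -- where the extra predicate doesn't match
        select t1.id, t2.val
        from table1 t1
            left join mytable2 t2 on t2.t1_id = t1.id and t2.val > 10

        -- drops table1 rows with no matching mytable2 row, silently
        -- turning the left join into an inner join
        select t1.id, t2.val
        from table1 t1
            left join mytable2 t2 on t2.t1_id = t1.id
        where t2.val > 10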

    If someone has a complex query or performance problems, they could use either subselects

        select ...
        from ( select ... from table1 where ... ) as table1_filtered
            inner join mytable2 on ... 
    
    
    or CTEs

        with table1_filtered as ( select ... from table1 where ... ) 
        select ...
        from table1_filtered
            inner join mytable2 on ...

  • gigatexal a day ago

    but SQL's logical execution order, even in a dumb planner, is FROM, then WHERE, then ... and then finally SELECT and ORDER BY

    so I don't get it

    • gonzalohm a day ago

      But a smart query planner would move the where condition into the JOIN condition

      • Sesse__ a day ago

        Depending on the join type, moving it up into the join (instead of keeping it on the table) would be either irrelevant or actively harmful.