SQL working [ was Re: The sorry state of the D stack? ]

Piotr Szturmaj bncrbme at jadamspam.pl
Sun Oct 7 05:27:38 PDT 2012


Jacob Carlborg wrote:
> On 2012-10-07 10:55, Russel Winder wrote:
>
>> Why only PostgreSQL. Shouldn't it also work with MySQL, Oracle, DB2,
>> PervasiveSQL, SQLite3, etc.?
>>
>>  From the example I assume that this is just a library for managing
>> connections and that everything else is just string-based SQL
>> statements. Groovy's and Python's lowest level is roughly the same.
>> However on top of these are expression languages in Groovy / Python so
>> as to remove the reliance on string processing, i.e. use an internal DSL
>> to do all the SQL stuff. For Python this is SQLAlchemy, for Groovy it
>> will hopefully be GSQL. I am sure Scala and C++ have something similar?
>
> They do.
>
>> So I guess the question is how to ensure this all works with all SQL
>> systems and how to put an abstraction layer over this to avoid all the
>> error prone string manipulation?
>
> ActiveRecord in Ruby on Rails uses several layers to handle all database
> related functionality. At the highest level there's a DSL which allows
> you to write the SQL queries mostly in Ruby. Another library, ARel, is
> used by ActiveRecord to generate the SQL code from the DSL. ARel handles
> all the differences among all the supported databases. ARel then passes
> the SQL code back to ActiveRecord where a lower layer handles the
> connections to the database and performs the actual query.
>
> Then you have another layer that transforms the response into objects,
> sets up all the relations and so on.

Having distinct layers that don't know each other isn't always a good idea.

In my postgres client one may specify field types at compile time. If I 
had divided the client into two separate layers, it would return a 
Variant[] at the first layer, then convert it to a user-specified tuple 
at the second. For example:

auto cmd = new SqlCommand(connection, "SELECT 1, 'abc'");
auto untypedRow = cmd.executeRow(); // returns DBRow!(Variant[])
auto typedRow = cmd.executeRow!(int, string)(); // returns DBRow!(int, string)

Internally, executeRow could always fetch a Variant[] and then convert it 
to Tuple!(int, string), but that is suboptimal: first it must allocate an 
array of two Variants, and then each Variant must be coerced to the 
corresponding type.
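
To make the cost concrete, here's a rough sketch of that two-step path 
(the function name is made up for illustration): the lower layer hands 
back a Variant[], and the upper layer coerces each element into the 
typed tuple.

import std.typecons : Tuple;
import std.variant : Variant;

// Hypothetical second layer: takes the Variant[] produced by the lower
// layer and coerces each element into the requested tuple type.
Tuple!(int, string) toTypedRow(Variant[] row)
{
    Tuple!(int, string) result;
    result[0] = row[0].get!int;    // runtime type check + copy
    result[1] = row[1].get!string; // another runtime check + copy
    return result;
}

The Variant[] allocation and the per-field runtime checks are exactly 
the overhead the compile-time-typed path avoids.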

Instead, the client fills each tuple field directly as the data comes 
from the socket stream. With binary formatting, all it has to do is 
swapEndian() integers and floats (no parsing!). Of course, there's one 
allocation for the string, but if we change the field type to char[4], 
there'll be no allocation at all.
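
For comparison, a minimal sketch of that direct fill (buffer framing is 
simplified and hypothetical; the real client reads field lengths from 
the protocol stream):

import std.bitmanip : bigEndianToNative;
import std.typecons : Tuple;

// Hypothetical direct fill: decode the wire bytes for one row straight
// into the typed tuple, with no intermediate Variant[].
Tuple!(int, string) readTypedRow(const(ubyte)[] buf)
{
    Tuple!(int, string) row;

    // Column 1: 4-byte big-endian integer -- just the byte swap
    // mentioned above, no text parsing.
    ubyte[4] intBytes = buf[0 .. 4];
    row[0] = bigEndianToNative!int(intBytes);

    // Column 2: text data; one allocation to copy it into a string.
    // A char[4] field type would let this copy disappear as well.
    row[1] = cast(string) buf[4 .. $].idup;

    return row;
}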

Just wanted to illustrate that "layers" shouldn't always be separate.

