It would be nice if I could get my ORLite performance boost without really changing the (undocumented) API. So I had one more idea: do the slicing in the ORLite-generated code. It’s available in a new branch on GitHub.
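The self-slicing idea boils down to something like the following. This is only a rough sketch of what the generated select could look like, not the actual code in the branch; the SQL, the column list, and the My::SelfSlice->dbh call are placeholders standing in for whatever ORLite really generates.

package My::SelfSlice::Dependency;

sub select {
    my $class = shift;
    my $sql   = 'select distribution, dependency from dependency';

    # Fetch plain array refs (no Slice => {}) and build the hash refs
    # ourselves instead of letting DBI do the slicing.
    my $dbh  = My::SelfSlice->dbh;                 # placeholder for the generated handle accessor
    my $rows = $dbh->selectall_arrayref( $sql );

    my @columns = qw( distribution dependency );   # placeholder column list
    my @objects;
    for my $row ( @$rows ) {
        my %hash;
        @hash{@columns} = @$row;                   # the "self-slicing" step
        push @objects, bless \%hash, $class;
    }
    return wantarray ? @objects : \@objects;
}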
To benchmark all three solutions I used a variant of CPANDB::Dependency::csv():
sub csv {
    my $class = shift;
    for my $edge ( $class->select ) {
        my $foo = $edge->distribution . "\t" . $edge->dependency . "\n";
    }
}
use Benchmark qw(cmpthese);

My::Plain->begin;
My::Unsliced->begin;
My::SelfSlice->begin;

cmpthese( -30, {
    plain     => sub { csv("My::Plain::Dependency") },
    unsliced  => sub { csv("My::Unsliced::Dependency") },
    selfslice => sub { csv("My::SelfSlice::Dependency") },
});
Unfortunately, it seems that having DBI do the slicing and doing it myself cost roughly the same:
            Rate selfslice plain unsliced
selfslice 1.61/s        --   -1%     -41%
plain     1.64/s        2%    --     -40%
unsliced  2.71/s       68%   66%       --
So I’ll probably end up making some sort of ORLite subclass, as Adam Kennedy suggested in a comment.