perlcperl - a perl5 with classes, types, compilable, company friendly


Description of changes and enhancements of the cperl variant of perl5.


init from a perl5 git dir:

    git remote add cp ssh://
    git fetch cp
    git checkout -t cp/master

for rerere:

    git config --add rerere.enabled true
    git config --add rerere.autoupdate true
    git submodule update --init
    ln -s ../.git-rr-cache .git/rr-cache

    git branch -r | grep cp/

We need a shared rerere cache to be able to continuously merge and rebase against perl5 upstream while keeping our branch progress. Commits with git-rr-cache and cp-rb in the Subject can be safely ignored upstream.

All branches are frequently rebased. Use the provided helpers cp-rb, cp-rbi, cp-lb and cp-rh.

The fastest performance unstable branches are currently feature/gh87-types-proto, feature/gh23-inline-subs and feature/gh14-native-types. The branch with the biggest memory savings is feature/gh9-warnings-xs.



    ./Configure -sder -Dusedevel -Dusecperl
    make -s -j4
    make -s -j4 test
    sudo make install


    ./Configure -sder -Dusedevel -Dusecperl \
      -Accflags='-msse4.2 -DPERL_FAKE_SIGNATURE' --optimize='-O3 -g' \
      -Dinstallman1dir=none -Dinstallman3dir=none -Dinstallsiteman1dir=none -Dinstallsiteman3dir=none
    make -s -j4 ECHO=true
    make -s -j4 ECHO=true test
    sudo make install

Debugging with private archlibs and exename:

    git=`git rev-parse @|cut -c-7`
    archname="`uname -s`-debug@$git"
    ./Configure -sder -Dusedevel -Dusecperl -DDEBUGGING \
      -Accflags='-msse4.2 -DDEBUG_LEAKING_SCALARS -DPERL_FAKE_SIGNATURE' --optimize='-g3' \
      -Dinstallman1dir=none -Dinstallman3dir=none -Dinstallsiteman1dir=none -Dinstallsiteman3dir=none \
      -Darchname="$archname" -Darchlib="/usr/local/lib/cperl/5.22.1/$archname" \
      -Dsitearch="/usr/local/lib/cperl/site_cperl/5.22.1/$archname" \
      -Dperlpath="/usr/local/bin/cperl5.22.1$exesuff" -Dstartperl="#!/usr/local/bin/cperl5.22.1$exesuff"
    make -s -j4 ECHO=true
    make -s -j4 ECHO=true test
    sudo make install

Incompatible changes

cperl tries to follow the old perl5 spirit and principles, unlike recent perl5 changes, which wildly deviate from it.

cperl can parse and run 99.9% of all perl5 code. But there are a few incompatibilities, which arise from late perl5 signature design changes and the 2002 constant folding changes, which we do not follow. perl5 signatures are marked as experimental after all.

CPAN works. Some modules currently need patches in our distroprefs repo, where the maintainers refuse to support cperl.

Some internal modules are explicitly modernized, which is denoted by the c suffix in the version number. Those modules use proper signatures and thereby do safer type-checking at compile-time, and are also 2x faster because of using signatures. Some toolchain modules also switched over to support 10x faster and more secure XS variants of JSON and YAML, and support the cperl improvements (i.e. builtin strict, DynaLoader, XSLoader) and add more security (i.e. DynaLoader, Storable, YAML). These modules usually get a +1 major version bump, so you cannot easily override them accidentally with worse cpan updates. Such as EUMM, bignum or Test2.

Illegal prototypes die, are not stored

In perl illegal prototypes warn with 'illegalproto' and are stored as such.

In cperl, illegal prototypes in signatures used without :prototype() immediately die. They cannot be suppressed with no warnings 'illegalproto', and they are not stored.

Rationale: Illegal prototypes are parsed as signatures. Illegal signatures throw a parser error. The 'illegalproto' warning is only thrown within explicit extra :prototype() declarations.

Technically this is not an incompatible change, as signatures are marked as experimental.

@_ is empty in functions with signatures

We only copy or reference arguments to signature variables, but not additionally to @_. @_ is empty when signatures are declared.

Rationale: With signatures, copying all values to @_ as well leads to double copying, which is 2x slower. @_ is not needed anymore. Either use no signatures to keep @_, or use signatures.

Technically this is not an incompatible change, as signatures are marked as experimental.

Empty signature variables $ die

Using a bare $ sigil signature variable is illegal in cperl, but legal in perl.

Rationale: This clashes with prototypes. A bare $ is a prototype declaration, not a signature. Use a name in the signature and don't use this name in the body to get back to the same behaviour.

Technically this is not an incompatible change, as signatures are marked as experimental.

negative integer modulo

With typed integer variables or integer constants the modulo operator in cperl works as with use integer: it uses the libc functionality for "%", via the i_modulo operator, which is different from the generic perl5 modulo operator.

Perl5 without use integer uses a different modulo definition: "If $n is negative, then $m % $n is $m minus the smallest multiple of $n that is not less than $m (that is, the result will be less than or equal to zero)" from "Multiplicative Operators" in perlop.

cperl with typed or constant integers behaves like perl5 with use integer or before 2002, when this was changed (e7311069). The compiler simply promotes modulo to i_modulo if both arguments are integers at compile-time. To fall back to the old behavior use untyped variables and no integer.

    # perl5:
    -13 % 4 => 3
    13 % -4 => -3
      use integer;
      -13 % 4 => -1
      13 % -4 => 1

    # cperl, with constant or typed integers:
    -13 % 4 => -1
    13 % -4 => 1

    # old perl5 behavior:
      no integer;
      ($a,$b) = (-13,4);
      $a % $b => 3
Background: p5p argued that perl uses the mathematically "correct" semantics, versus this definition from the C standards committee:

"When integers are divided, the result of the / operator is the algebraic quotient with any fractional part discarded. If the quotient a/b is representable, the expression (a/b)*b + a%b shall equal a."

Perl obeys this only with use integer.

    use integer; my ($a,$b)=(13,-4); print int(($a/$b)*$b) + $a % $b => 13

    my ($a,$b)=(13,-4); print int(($a/$b)*$b) + $a % $b => 10

We found only one place in a module using constant negative integers with %. It is a core module, and is therefore fixed. Testing many other CPAN modules found no further problems, but watch our distroprefs repo for future patches.

Rationale: This is a bad perl5 design change and a side-effect of using type-promotion before constant folding. cperl uses proper type dispatch, and with two integer arguments it promotes the modulo op to i_modulo, which behaves differently (or "normally"). The perl5 constant folder should do the same, but currently does not. The extra logic to avoid the libc implementation also makes the perl5 "%" operator slower.

compile-time constant folding overflows with integer literals

Operations with only literal integers will not overflow to numbers, similar to perl5 with use integer.

Only "+" and "*" will try to use UV (an unsigned value) as result, but not NV (an inexact number). All other ops besides divide and shift return IV (signed integers). Note that even comparison operators are integerized.

    my $iv_min = -(~0 >> 1) - 1;   # ok
    my $iv_min_1 = -(~0 >> 1) - 2; # BUT this overflows
    my $iv_min_1 = -(~0 >> 1) - 2.0; # avoid overflow

    my int $z = 4 / 5; # => number 0.8, violating the type checker
    # but!
    $z == 0; # => TRUE, because lexical int == const int uses i_eq,
             # the integer variant, which integerizes $z to 0

All core tests pass; no tests had to be changed. In practice only compile-time constants in the negative IV_MIN to UV_MIN range will need to be fixed.

Rationale: This was the original issue in 2002 which disabled automatic integerization of all arithmetic ops with constants. "+" and "*" are good exceptions to allow unsigned results (with the new u_add and u_multiply ops) to create large constants, but all other integer operations need to stay precise rather than degrade to inexact numbers, and return signed integers.

Exceptions keeping the old behaviour: division of two integer constants is an internal exception and is not promoted to integer division: 2/5 => 0.4. Compile-time shift operations return unsigned integers.

Fix breakage and bad design


The p5p signature implementation is still lacking many important features and is twice as slow as doing without signatures, and twice as slow as the cperl implementation, which uses the stack variables directly without copying them to @_. There is no point in using it. It is the biggest cause of backwards-incompatible changes, but it is marked as experimental, so the perl5 implementation can eventually be improved, and our changes are technically not incompatible.

Add optional types in signatures
    sub (int $i)   # as in Perl6, or
    sub ($i: int)  # as in the other gradual typed languages

We use the same syntax as provided for lexical variable declarations, in both variants: in leading position, as with my int $a; and as an attribute, as with ($i :int :const).

We need to separate coretypes (int, uint, str, num) and user-defined types (existing class names), plus the two core attributes :const and :unsigned. For more perl6-like traits see below.

Follow the same rules as in lexical declarations. The type must exist already as package, otherwise a syntax error is thrown.

    $ cperl -e'sub x(x $x){}'
    No such class x at -e line 1, near "sub x(x"

    $ cperl -e'sub x(str $s){}'          # coretypes implicitly loaded

    $ cperl -e'%MyStr::; sub x(MyStr $s){}'  # user-defined type MyStr

Add subroutine return types

For easier implementation we support subattributes, :<type> only, not the other possible syntax variants => type or returns type.

There are just a few semantic conflicts. Note that we can use the builtin attributes :lvalue, :method, :const also here.

:const does not mean constant result, it rather means constant subroutine. Having this constant means that the compiler is able to inline it without run-time checks if it has changed.

:unsigned as coretype or :uint? As a sub attribute it could mean strictly typed to a return result of UV, like :int :unsigned, where the :int is optional. :-unsigned would then mean :int, i.e. return a signed int. :unsigned could also be no coretype, just a hint for :int, and without :int it would just be an attribute, not a strictly checked coretype. This decision is still open. For now we use :uint.
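A hedged sketch of how such return-type attributes would look (cperl syntax as described here; the subroutine names are invented for illustration):

    sub len(str $s) :uint { length $s }   # strictly typed unsigned result
    sub neg(int $i) :int  { -$i }         # signed int result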

Add call by-ref via \$arg

Support scalar lvalue references - sub (\$var)

With perl5 upstream all arguments are copied only, as with my $arg1 = shift; but alternate syntax for fast $_[0] access is not provided. So they have to keep the otherwise unneeded @_ array around.

cperl uses \$name to denote references to scalar lvalues, which change the calling variable.

    sub myfunc(int \$i) : int { $i++; }
    my $i;
    print myfunc($i); => 1
    print $i;         => 1

For now only scalar lvalue references are supported; \@a or \%h would be nice, with type checks for arrayref or hashref, maybe \[$] also.

Improve @_ handling

Remove @_ when not needed. Use the mark stack as in the ops.

With cperl, @_ will only hold the &rest args, the undeclared rest, if no other slurpy args are declared. I.e. @_ will be empty when a signature is declared with a trailing slurpy @ or % arg, and @_ is not referenced in the immediate function body visible to the compiler, i.e. not hidden by a string eval.

We want to use the mark stack for signature calls, same as with OPs and XS calls. We don't need to copy to @_.

Notes on improving the current old zefram signatures:

Internally the elements of @_ are currently accessed via aelem, not aelemfast. But this is moot with the introduction of the new OP_SIGNATURE op, which is even faster.

With the old 5.18-5.22 implementation the perl5 arity check is overly big: it needs two checks and big error strings for fixed-arity subs.

    perl -MO=Deparse -e'sub x($a){$a++}'
    sub x {
        die sprintf("Too many arguments for subroutine at %s line %d.\n",
          (caller)[1, 2]) unless @_ <= 1;
        die sprintf("Too few arguments for subroutine at %s line %d.\n",
          (caller)[1, 2]) unless @_ >= 1;
        my $a = $_[0];
        $a++;
    }

which violates the existing error message: "Not enough arguments for "

Support default $self invocant with methods

If a method is declared via method the $self argument is used as default invocant argument name, which can be overridden via the ($class: $args,...) colon syntax.

    method adder ($a) { $self->add + $a; }

With the method attribute you have to provide it:

    sub adder ($self, $a) :method { $self->add + $a; }

Add :pure attribute for subroutines

You can flag a function as purely functional, without any side-effects, to allow further compiler optimizations. Note that pure functions may throw, but may not access globals or do IO.

A pure function only reacts to its input arguments and will always return the same value with the same arguments, thus can be safely memoized or constant-folded or inlined without having to embed them into ENTER/LEAVE blocks.
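As an illustration, a hedged sketch of a function that would qualify for :pure (cperl signature syntax; gcd is an invented example, not from the source):

    # no globals, no IO: safe to memoize, constant-fold or inline
    sub gcd(int $a, int $b) :pure {
        ($a, $b) = ($b, $a % $b) while $b;
        return $a;
    }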

Improved error reporting

On violations, do not only print the position, but also the declaration which is violated, e.g.

    @a=(); sub x(\@b) {$b->[0]++} print x(\$a)

The old error

Type of arg 1 to main::x must be arrayref (not a scalar ref) at -e line 1, near "\$a)"

becomes

Type of arg 1 \@b to x must be arrayref (not a scalar ref) at -e line 1, near "\$a)"

Proper signature types are not only a great help for catching errors early and improving documentation. They are performance critical; see coffeescript, dart, microsoft typescript, google soundscript, facebook hack, mypy, ruby 3.0 and partially even perl6. The type inferencer will not be able to infer many types without explicit types. But with typed signatures, besides the obvious solution of private methods or closed classes, we can inline most small methods, ignore run-time magic and improve most loops and array accesses. It is also critical to implement multi methods (compile-time optimized generics) and an advanced object system.

Note that the reported main subroutines are listed without the main:: prefix.

Support ... for efficient varargs passing (NY)

... as empty function body already has a special meaning as yadayada operator, just croaking, but interestingly not the usual meaning of varargs.

cperl uses ... in the natural way to denote unnamed and uncopied rest args, and passes the varargs through to the next call.

... denotes a slurpy unnamed signature, and ... in an otherwise non-empty function body denotes passing those arguments efficiently to the next function. Internally ... does not need to copy the values into a temporary array; we just need to pass the argument stack position down to the next calls using .... By using ... instead of @_ we avoid copying the values to @_; we only need the stack index, not all the values.

    sub foometh($self, ...) { func(...) }

In an extern sub declaration, the ... denotes varargs as in C.

strict prototype and signature syntax, no pragmas required.

Illegal prototype and signature syntax does not just warn, it dies with a syntax error, as it should.

    $ cperl -e'no warnings "illegalproto"; sub x(x){}'
    No such class x at -e line 1, near "sub x(x"

no warnings "illegalproto"; is a noop.

no feature signatures or lexsubs or lexical_topical pragmas required

use feature "signatures" or use feature "lexsubs" is not required and is ignored. All prototypes and signatures are parsed either as prototypes or signatures, regardless of the scope of a use feature "signatures" pragma. my sub is parsed without previous activation.

my $_ does not require use experimental::lexical_topic. use cperl can be used, if incompatible features are used, but it is optional.

Changed calls to signatures

goto to a signature is now a true tail-call. The stack and pad frames (lexical variables) are not duplicated as in a recursive call, they are re-used. The call-stack behaves as before and in python, so a tail-call (i.e. a goto to a signature) is visible in the call-stack.

But since the args are passed directly on the stack, any old &$sub; call without () to a signature will not work, you need to use goto \&$sub; instead. Only tailcalls via goto can translate from the old pure-perl stack to the new re-used stack via signatures.

The error message is "Not enough arguments for subroutine \w+. Want: \d, but got: 0.".

How to detect if a subroutine has a signature?

The prototype of a sub with a signature is always embedded into parentheses, "()".

    sub _hassig {
        my $sub = shift;
        substr(prototype($sub), 0, 1) eq '(';
    }

    return _hassig($sub) ? goto \&$sub : &$sub;


But you can also change all calls via &$sub; to goto \&$sub; to avoid the _hassig check.

Fix my $_ handling

Perl5 had long-standing problems with lexical $_ since the introduction of the SASSIGN optimization via OA_TARGLEX and OPpTARGET_MY in 2002.

cperl fixed this (it used wrong bit-testing, using AND as OR), and work is ongoing to harmonize further internal code exceptions and code-smell, e.g. SASSIGN, given/when and the match functions, everything with nested blocks.

Undo B bootstrap breakage

B was changed to use strict, which broke the B::Bytecode compiler performance advantage, adding all compiler-internal constants to the emitted bytecode. Reverting this breakage was denied because this developer did not understand the code. It is too bothersome in the long run to maintain our reversion of this breakage over years. It is easier to check for usecperl in the compiler to be able to compile to bytecode properly again. Note that they broke it again with 5.22, and are again refusing to fix it.

Undo constant folding i_opt de-optimization

Automatic integer optimizations for constants were removed from constant folding against community consent in the early p5p times with commit e7311069.

We re-add this optimization to treat constant integers and typed lexicals as such (as via an implicit use integer) in constant foldable expressions. I.e. integer overflow in constant expressions is only checked at compile-time, not run-time.

We re-instate the two special cases for I_DIVIDE and I_MODULO, which deviate from the untyped generic variants. div int / int returns a float even with constants, and mod with negative integers uses the standard C variant only if typed or within use integer. The untyped variant uses the perl5 definition of modulo as in group theory, which violates the definition of a remainder.

    my int $i = -3; $i % 2 => -1, not 1.
    -3 % 2 => -1, not 1.
    1/2 => 0.5 (unchanged)

We are also now able to do constant-folding on subroutine bodies, to either inline the body into the caller or replace the body with a constant in the general case. (i.e. without the () prototype)

Undo support for binary names

By announcing unicode support for names with 5.16, p5p silently allowed \0 inside names, which they called an advantage, now supporting binary-safe names and "harmonization". In reality unicode names were already supported since 5.8.4 (with negative HEK key lengths), and the whole 5.16 unicode name theatre was only about binary names. They didn't support binary names in all the other code parts which have to deal with names, and thus enabled a huge attack vector: arbitrary user strings can hide behind \0 in names, which were silently stripped before, and which end up in syscalls. They moved responsibility to the user, as previously for input strings only, now also for input names, e.g. for package names, which search the filesystem directly without sanitation or proper checks. Support for those binary names is still not complete in core with 5.24, even if p5p pushed so strongly for it against my protest.

We have to keep the new GV API - accepting the string length - but even without strict names we strip everything behind the \0 as before 5.16.

For a more efficient dynamic namespace implementation we might switch from chained hash tables to a single ternary trie, radix tree or DAFSA, without support for \0 and maybe even without optional support for unicode names. Only with use utf8 might we then need to fall back to the old slow method.

Warn on \0shellcode attempts for names

Make our use warnings "syscalls" the default.

Any attempt to attack package names with shellcode behind \0 is warned about by default, not only optionally with use warnings "syscalls". There is no innocent or merely wrong usage of such names, only malicious intent, and this must appear in logfiles. This is worse than syntax errors, and syntax errors are warned about by default. p5p was vehemently against this change.

cperl has a new 'security' warnings category, which bypasses STDERR capture and also tries to log the remote user IP.

strict names

With use strict "names" we do not accept unparsable symbols created from strings. This is a new run-time error for use strict.

TR39 confusable names

Since 5.26 cperl rejects confusables as described in TR39, rejects most mixed scripts and normalizes unicode identifiers, similar to python 3. cperl is actually one of the very rare dynamic languages with full unicode identifier support which actually does follow the unicode consortium security recommendations and profile. cperl implements the Moderately Restrictive level for unicode identifiers. See "Identifier parsing" in perldata.

The compiler had to add this warning from 5.16 until 5.26:

Perl handling of new unicode identifiers - package and symbol names - without proper TR39 handling is considered a security risk and is not fully supported.

Check your code for syntax spoofs, confusables, strip \0 from package names and enable use warnings 'syscalls'.

Undo the double readonly system

In order to support Hash unlock code, i.e. undoing readonly setting of hash values, p5p added a second readonly bit SVf_PROTECT for special values which are not allowed to be writable, even though the better solution, checking for these special values at hash unlock, would have been trivial. It did not need to take away the last free SV bit, which we used to implement coretype checks on pad values. Unlock really only needs to unlock the previously locked values, not make all values unconditionally writable, which turns previously readonly values writable. SVf_PROTECT does not help with that.

SVf_PROTECT is now the same as SVf_READONLY, and special checks were added for the two use cases when the readonly bit is unset. We need the SVf_PROTECT bit to mark native SV's in pads, and had even reserved it in 2012.

Fix the hashes

Provide proper hash table abstractions. We don't need five copies of the same bad code for all the different kinds of HEK (hash key) types.

Do not check the hash key for collisions with 4 different comparisons in the hot hash loop, use one instead.

For the old hash tables use the new default strategy PERL_PERTURB_KEYS_TOP to move each found bucket to the top of the chain. This is how you usually implement a slow hash table with linked lists.

Use fast hash functions, not secure slow ones. We get security by fixing the algorithmic problem in the collision handling, not by using extremely slow hash functions and obscuring them from users and fellow developers. We properly analyzed many hash functions and hash tables, for security and speed. The fastest, FNV1a, was the one the fellow developer schmorp chose by gut feeling for his stableperl fork.

Provide a Configure argument to define the hash function: -Dhash_func=FNV1A

Make the load factor definable, and change the default from 100% to 90%, which was tested as superior. Use -Accflags='-DHV_FILL_RATE=100' for the old behavior.

Further plans:

Hash functions need to be implemented as macros, not functions; undo that change. (maybe)

Use cache-friendly open addressing, not simple, slow and DOS'able (i.e. insecure) linked lists.

Separate the keys from the values to fit the search into a cache line.

Added support for restricted hashes, esp. stashes which we needed for our class implementation, to detect wrong fields at compile-time already. restricted hashes are not properly supported in perl5.

Later we will provide a special :const hash table type to enable optimizations to perfect hashes. With study %hash you can do the similar costly optimizations on non-const hashes at run-time to allow faster key access.

Compile-time attribute hooks

Added a CHECK_SCALAR_ATTRIBUTES callback. Add native :const, :int, :num, :str, ... attributes for all new core types. This is basically a read-only MODIFY_*_ATTRIBUTES hook at compile-time with a better name to disassociate from the run-time check of FETCH_*_ATTRIBUTES with my lexicals. See also "Rewrote critical core modules in C as builtins" below.

Run-time attribute variables

cperl defers the attribute->import call from compile-time to run-time for non-constant attribute arguments.

coretypes: Int, UInt, Num, Str

coretypes can only be built-in; there's no way to implement them as an extension, similar to an object system. cperl provides the 4 basic ones, the combinations with Undef, and the lowercase native types. Same as in perl6. Type combinations are done via @ISA, i.e.

    class ?Int :const { our @ISA :const = qw(Int Undef); }

Our coretype classes and their members are readonly. Provide fast op variants for these types to omit type checks and magic calls at run-time. Scalars declared as Int, UInt, Num or Str cannot hold magic associations, such as tie.

We also enabled the :const attribute for all data: scalar, arrays, hash, functions, packages+classes.

At compile-time most UNOP's and BINOP's are promoted from the generic ops to more specific typed ops, similar to use integer. But use integer does not know the types of the variables at compile-time; many ops are only dispatched at run-time. See e.g. "negative integer modulo".

See "coretypes" in perltypes and "Constant and typed lexical variables" in perldata.

native unboxed types: int, uint, num, str (NY)

Internally the types for all scalar SV's always start with uppercase classnames, same as with most user-defined classes. The four lower-case variants int, uint, num, str denote possible optimizations to direct unboxed values on the stack, which are not reference counted, and cannot yet be used across function calls. They are only safe to use within certain op sequences, and those optimizations are done automatically.

Unlike all other types, native types are only hints, not promises. The compiler promotes data and code to native types only if it sees fit.

With PERL_NATIVE_TYPES enabled, most literal constants are stored as native types, native type declarations are a promise, not a hint, and the optimizations involve up- and downgrading of data and ops in possible native chains. This leads to much tighter native expressions, with performance and memory gains (3x less memory, ~3x faster).

For the builtin FFI we provide also FFI-specific native types, like int32, int64, uint32, uint64, ptr and more.

See "native types" in perltypes.


Type inference

Provide a compile-time type inferencer, type checker and type optimizer. The inferencer runs automatically and can currently only infer int on array indices, ranges and str on hash keys, but has to give up on magic, dualvars, and no strict 'refs'. With the help of declarations and type checks, as e.g. in smartmatch or given/when with type support, it can infer much more.

    if (type $a eq "int") {  => $a is an int in this scope }
    $str =~ /(\d+)/; => $1 is a typed Int

Compile-time type checks need to be enabled with use types; though.

Typed signatures are backwards-incompatible with perl5, as the trivial 4-line changes are still not supported upstream. The performance win is ~2-10x, you get compile-time type warnings, a business-friendly coding environment and the possibility to display and insert inferred types automatically in your code, with a cooperating editor. e.g.

    # untyped
    my $n=1000;
    for (my $i=0; $i<$n; $i++) { }

    # typed
    my int $n :const = 1000;
    for (my int $i=0; $i<$n; $i++) { }

Note: When in doubt, leave out types. If the inferencer cannot find a type, it might not be worth the trouble. But for hot code, and to be precise, always use types, as compile-time types prevent costly run-time checks for types and magic hooks.

Builtin types are the coretypes Int, Num, Str, UInt, ?Int, ?Num, ?Str and for builtin op-dispatch: Void int uint num str Int UInt Num Str Bool Numeric Scalar Ref Sub Array Hash List Any, with a ? prefix denoting | Undef "or undef", a ? suffix is optional, and for aggregate types using () brackets, like :Array(:Int).

Status: User code in pure perl or XS is currently not typed-checked nor inferred, only internal ops. But wrong type declarations do lead to compile-time type violation errors.

:const for all

Our :const attribute applies to all data types: scalar, arrays, hash, functions, packages + classes. :const hashes should of course be perfect, i.e. optimized to constant-time lookup, eliminating hash collisions.

The following declarations are all compile-time assigned, and allow subsequent constant folding on all usages. I.e. const assignments with all constant rhs values.

    my $i :const = 1;
    our $i :const = 1;
    my @a :const = (1);    # also sets the array shaped as int @a[1]
    my @a :const = (1,2);  # shaped as int @a[2]
    my @a :const = (0..2); # shaped as int @a[3]

With non-constant right-hand side values the assignment is done at run-time, and thus only writes are caught at compile-time, but constant folding is not available. Hashes are also not yet compile-time assigned.

:const arrays with values of unique types will be optimized to native shaped arrays. See "Typed and shaped arrays".

See ":const" in perltypes.

Compile-time optimizations

cperl adds many more traditional compile-time optimizations: more and earlier constant folding, type promotions, shaped arrays, usage of literal and typed constants, loop unrolling, omit unnecessary array bounds checks, function inlining and conversion of static method calls to functions.

Perl 5 only inlines constant function bodies with an explicit empty () prototype.

    sub x() {1+2} # inlined in perl5
    sub x   {1+2} # inlined in cperl only

cperl inlines constant function bodies even without empty prototype declaration, has type declarations for most internal ops, and optimizes these ops depending on the argument types; currently for all arithmetic unops and binops, and the data-accessing ops padsv, svop, and sassign. opnames.h stores PL_op_type_variants, all possible type promotions for each op. opcode.h stores PL_op_type with the type declarations of all ops.

Small non-constant bodies may be inlined automatically, with the args replaced in the body. Constant args and literals are replaced as-is, as are args used as simple rvalues. Lvalue args are checked for call-by-ref or call-by-value semantics, and use the arg either directly or as a copy.

With shaped arrays and their new unchecked aelem_u variants, loop bodies are optimized when the upper loop bound is declared via $#, the arylen.

    for (0..$#array) {  .. $array[$_] .. }

Here the $array[$_] call does no bounds check of the index, since $#array is the last valid index, and we do not shrink the array in the loop, even if the array is not explicitly declared as shaped array.

Work is ongoing in loop unrolling, function inlining and speculative method inlining, which should speed up run-time performance dramatically and enable new optimizations which were previously stopped on each function call border. Planned is also polymorphic inline caching with a usage counter, not just naive monomorphic inline cache for method calls. This is needed for the jit optimizer.

Check the new DEBUGGING option -Dk which lists all optimizations and checks at compile-time, optionally verbose together with -Dkv.

Static method calls

    strict->import(...)  =>  strict::import("strict", ...)

When the method is defined directly in the package, it is not possible to inject another package at run-time into the method search, thus the method call is optimized from a dynamic method dispatch to a normal static function call.
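
A rough analogue of this devirtualization, sketched in Python (the class and method names are made up): when the receiver's class is known and the method is defined directly in it, the dynamic name lookup can be resolved once and replaced by a plain function call.

```python
class Strict:
    # method defined directly in the package/class, not inherited
    def do_import(self, *tags):
        return ("strict", tags)

obj = Strict()

# dynamic method dispatch: the method is looked up by name at call time
dynamic = getattr(obj, "do_import")("refs", "subs")

# "static" call: resolve the function once at compile-time, call it directly
static_fn = Strict.__dict__["do_import"]
static = static_fn(obj, "refs", "subs")

assert dynamic == static == ("strict", ("refs", "subs"))
```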

Proper object system

No, not Moose, rather perl6 without the antics. It is closer to Mouse, which is currently the recommended OO module: Mouse with a perl6-like syntax. Rather, an optimizable perl6-like object system in core, similar to an extended base/fields OO with pseudohashes, plus roles, a MOP and multi-dispatch. See perlclass.

Provide a simple MOP for reflection purposes, but not yet for overrides: metaobjects for classes, proper class and method syntax, anonymous classes by pointer not name, proper multi dispatch with types, roles, and class and method lookup by pointer, not by name. Create native optimized shapes via mixins as in perl6 or p2 (an enhanced bless). Lexical methods are private, and dispatch is optimized for single inheritance; i.e. the convenient class syntax extends a single class only, and classes are finalizable by the calling application.

Support multi-dispatch in perl6 syntax and optimize for early-bound method calls with typed arguments and closed classes.

Class dispatch is via C3, not the old depth-first left-to-right mro implementation. Many old core modules have broken inheritance, so C3 could not yet be enabled as default.
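
Python's method resolution order uses the same C3 linearization, so it can illustrate the difference from the old depth-first left-to-right lookup (a sketch; the class names are made up):

```python
class A: pass
class B(A): pass
class C(A): pass
class D(B, C): pass   # diamond inheritance

# C3 linearization searches C before the shared base A
assert [k.__name__ for k in D.__mro__] == ["D", "B", "C", "A", "object"]
# old depth-first left-to-right would have searched D, B, A, C instead
```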

For type inference the special Mu methods new and CREATE are detected and cause the left-hand side of a lexical assignment to propagate its type into the lexical.

Typed and shaped arrays

They enable faster array access and uniformly typed array values, use less memory (cache friendly), and help the type system. See perltypes.

Typed arrays specify the uniform type of their values. Typed arrays with native types use much less memory and provide faster direct access. Natively typed arrays also allow access from, and passing as arguments to, external C functions.

Shaped arrays define a compile-time constant "shape", the size, which cannot be changed. All values are pre-initialized.

  my int @a[20];         # or
  my @a[20] :int;

The number of elements can be computed at compile-time from the elements of a rhs list of all-constant expressions, similar to the initialization of :const arrays. See ":const for all".

  my @a[] = (0..2);

With known indices the compiler can omit bounds checks on array accesses.

    my int @a[5];
    for (0..$#a) { $a[$_] ... }

Here $a[$_] uses the unchecked aelem_u operator, because the index $_ cannot be out of bounds.

Shaped arrays are pre-initialized according to their type and cannot change their size; attempts to do so are caught at compile-time and run-time. With constant or type-checked indices in range, the access op is optimized to omit the bounds check, via the unchecked aelem_u variants. Negative constant indices are converted to positive ones at compile-time.
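
The effect of uniform native typing can be illustrated with Python's array module (an analogue only, not cperl's implementation): elements are stored as machine integers rather than boxed values, and wrong-typed stores are rejected.

```python
from array import array

a = array('i', [0] * 20)          # roughly: my int @a[20]; all pre-initialized
assert len(a) == 20 and a[0] == 0
assert a.itemsize in (2, 4, 8)    # one native int per element, no boxed values

a[3] = 42                         # typed store of a native int
try:
    a[0] = "x"                    # a non-int value is rejected
except TypeError:
    pass
assert a[3] == 42
```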

Typed and shaped hashes (NY)

  my int %h;             # or
  my str %a;             # hash with str values only
  my int %a{20};         # hash with int values only. fixed hash size,
                         # no grow on insertion.
  my str %a{20} :const   # fixed hash size, no grow on insertion,
                         # perfect hash (keys may not change, values do).
     = (...);

No sparse arrays (i.e. hashes with int keys) yet, but this would need a different declaration syntax if it is to be supported natively.

E.g. my int %a{int}; - a sparse array with int keys and int values. Or my %a :hash(int);

Variant 2: my IntSparseHash %a, which can go with a user class and methods, but this would be slow, without native ops.

Inlined functions (NY)

If a function body is inlinable, i.e. simple, with no control ops like return, goto, caller, warn, die, reset, runcv, padrange, adds no separate lexicals and has fewer than 10 ops, it is inlined. Static methods are converted to functions first and then possibly inlined.

This is the most important optimization, even more important than a jit.

Rewrote critical core modules in C as builtins

Builtins: strict, attributes, DynaLoader, XSLoader. NY: Carp, Exporter.

As shared lib: Config, and later warnings and the unicode folding tables, to save memory and startup time and to reduce bloat.

Big constant hashes and tables need to be in a shared memory segment, not recompiled for every fork or thread, similar to the Encode tables, which are implemented properly. The risk of introducing even more performance regressions by keeping some critical core modules as .pm is too high, and this broke the compiler too often. Most developers have no idea of the impact of innocent-looking additions.

We need to reduce memory, and want to reduce the size of compiled code by 30%, but in some cases it will be 200%. As builtin or shared library we get zero startup-time overhead for those modules. With a compiled Config alone the memory savings are back down to 5.6 levels.

strict 1.11c as builtin

Starting with cperl (based on Perl 5.22) strict is now a builtin module, implemented as XS functions which are always available.

Changes: the .pm is only provided for documentation, $INC{''} = 'xsutils.c'. With a list of wrong tags, the wrong tags are now reported one-by-one, and not together. All other functionality stays the same.

strict has two new modes: strict hashpairs and strict names.

attributes 0.26_01c as builtin

Starting with cperl (based on Perl 5.22) attributes is again a builtin XS module. There is no need to dynaload it at parse time. The .pm moved back to lib/ and is only provided for documentation and import.

CHECK_type_ATTRIBUTES is a new compile-time hook, like a readonly variant of MODIFY_type_ATTRIBUTES, or the compile-time variant of FETCH_type_ATTRIBUTES.

Attribute arguments can now be variables, deferred from compile-time to run-time.

There are several new builtin attributes:

:const for all types
unsigned for all integer types, sets SvIsUV_on|off
existing classes as types are recognized

They are stored for lexical types and subroutine return types.

some new FFI attributes

DynaLoader 2.00c and XSLoader as builtins

Starting with cperl (based on Perl 5.22) DynaLoader and XSLoader contain no perl code anymore; they were rewritten as dlboot.c.


@dl_library_path now eliminates all duplicate paths and resolves symlinks of $Config{libpth} at build time.

Only the $ENV{PERL_BUILD_EXPAND_CONFIG_VARS} settings are implemented. All Config settings are compiled in at build time; run-time changes are not honored. Config is now also a compiled module, its pure-perl implementation is gone, and its hash was always readonly, so there is no way to change Config values at run-time without recompiling it.

Not sure yet about keeping support for .bs hooks and @dl_resolve_using.

The XSLoader::load_file($module, $modlibname, ...) function is new, XSLoader is a builtin also. ... is passed to the loaded XS function as with XSLoader::load($module, ...).

Config as XS extension

It is compiled as shared library, with all keys as readonly perfect hash. Some internal variables are not accessible anymore, the API is via the documented functions. See our Mock::Config module if you need to change a Config value for tests.
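
The readonly behaviour can be sketched with Python's MappingProxyType (a loose analogue of the readonly perfect hash, not the XSConfig implementation; the keys and values here are illustrative):

```python
from types import MappingProxyType

_data = {"usecperl": "define", "libpth": "/usr/lib /usr/local/lib"}  # made-up values
Config = MappingProxyType(_data)   # a readonly view, like the compiled %Config

assert Config["usecperl"] == "define"
try:
    Config["libpth"] = "/tmp"      # any write is rejected
except TypeError:
    pass
```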

Also usable for perl5, via cpan XSConfig.

warnings as XS extension (NY)

It is compiled as shared library, the builtin categories are implemented as perfect hash, and extended with a normal perl hash. Some internal variables are not accessible anymore, the API is via the documented functions.

Status: 1 scope bug with Carp. Not yet enabled.

Added the compiler back to core

The B::C testsuite runs too long for cpan users, and it needs to be developed in sync with core to avoid the typical 6-month wait-time after a core change. The compiler is smoked on all smoker platforms (linux 64bit, linux -m32, darwin, windows 32+64bit).

Maybe provide python-like precompiled ByteCache .pmc files as default. You could then pre-compile modules with higher optimization levels, esp. the type inferencer. .pmc handling has been extended for reflection inside a .pmc, needed for the jit cache.

Maybe include a Data::Compile module to dump only data without all the code to a shared library, and possibly Perfect::Hash and a new ph.c to create and optimize readonly hashes, which is needed for the shared XS hashes of Config, warnings and unicode tables.

Backport core testsuite fixes for the compiler

Honor differences between compile-time and run-time when run compiled. Other than a few wrong testcases, the compiler does pass the core testsuite.

See for a generally improved testsuite for perl5, cperl and B::C.

libffi in core

Declare extern functions and libraries and call them. There is no need for XS and separate compilation for most bindings. Not everybody has a compiler, let alone the very same compiler perl was compiled with. libffi is the slowest ffi library, but has the best platform support, as it is integrated with gcc for java jni support.

   extern sub atoi(str $s) :int;
   extern sub itoa(int $s) :str;
   extern sub printf(str $s, ...);

Note that this deviates from the perl6 syntax with is native, but perl6 uses the is traits for all function attributes, which we do not. And we prefer the more natural C-like syntax over the more obscure perl6 syntax.

The alternative syntax to extern sub is via the :native sub attribute, which does allow options, as in sub atoi(str $s) :int :native($libc);.

Later with a jit in core we would not need the external libffi dependency.
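
The same extern idea can be sketched with Python's ctypes, which also builds on libffi; the declarations mirror extern sub atoi(str $s) :int (a hedged analogue, not cperl's API):

```python
import ctypes
import ctypes.util

# load the C library; fall back to the current process's symbols
libc = ctypes.CDLL(ctypes.util.find_library("c") or None)

libc.atoi.argtypes = [ctypes.c_char_p]   # str $s
libc.atoi.restype = ctypes.c_int         # :int

assert libc.atoi(b"42") == 42
```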

unicode folding tables as XS extension

They should be compiled as a shared library, with all keys as a readonly perfect hash or as a trie. Some internal variables are not accessible anymore; the API is via the documented functions. The trie lookup should not be implemented in pure perl (80x slower), and keeping the data in a readonly shared library gives huge space savings (2M).

Maybe separate them into a light folding table and heavy names and properties.

This was done by p5p with 5.30 eventually.

Plans for further core features

Lexical methods

Lexical methods are of course private to their enclosing class, i.e. not visible from outside the class. They are also closed (also called sealed or final) as planned by Damian Conway, i.e. they cannot be changed later. This enables the compiler to inline them automatically when it's worthwhile, e.g. when they are small enough. They are defined either in the new style: class .. { my method .. } or the old style: package .. { my sub .. }

Carp 2.00c as builtin

Carp might be implemented as a builtin XS module. There would be no need to require it; the .pm is only provided for documentation, $INC{''} = 'xsutils.c'. Many carp functions are added to the perl5 API and available to core and extensions.


Currently shortmess is only simplified; the step to skip packages which trust each other (via @CARP_NOT or @ISA) is not yet implemented, neither is the CARP_TRACE formatting hook. The deprecated $Carp::CarpLevel variable is now ignored.

Other not yet implemented variables: $Carp::MaxEvalLen, $Carp::MaxArgLen, $Carp::MaxArgNums, $Carp::RefArgFormatter


This branch is currently not included. It is too unstable to get the caller depth right between pure perl and XS, and most extended hooks (format, CARP_TRACE, CARP_NOT) are not yet implemented. Note that XS calls usually do not get counted in caller, unless you use the SCOPE keyword.

Remove Attribute::Handlers from core

This evals all attributes at compile-time. Discourage its usage.

Longer term goals

Faster functions and method calls

Optionally omit caller, @_, freetmps and the exception handler when possible, i.e. when no string evals are present in the body, the compiler can detect it, or the right compiler hint is given.

tailcall elimination

Detect tail positions and replace the call with a fast goto. This needs to be in core, not in an external module. Without tailcall elimination, handling longer lists leads to stack exhaustion. python made the same mistake as p5p by enforcing a new context for every goto. gotos into subs with signatures are now real fast tailcalls.
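
What tailcall elimination buys can be sketched in Python, which (like unpatched perl5) does not eliminate tail calls: the recursive form grows the stack with every call, while the loop the optimizer would effectively produce does not.

```python
def sum_rec(n, acc=0):
    # tail position: nothing happens after the recursive call returns
    return acc if n == 0 else sum_rec(n - 1, acc + n)

def sum_iter(n, acc=0):
    # what tailcall elimination effectively turns sum_rec into
    while n:
        acc, n = acc + n, n - 1
    return acc

assert sum_rec(100) == sum_iter(100) == 5050
# sum_rec(10**6) would exhaust the stack; the loop form runs fine:
assert sum_iter(10**6) == 500000500000
```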


Macros

Use the existing perl6 S06 synopsis and testcases. In the absence of a signature to the contrary, a macro is called as if it were a method on the current match object returned from the grammar rule being reduced; that is, all the current parse information is available by treating self as if it were a $/ object. Macros may return either a string to be reparsed, or a syntax tree that needs no further parsing.

Change the syntax a little bit to harmonize with perl5:

    {{{ }}} => `` for unquote splicing.

Expand the ast inside the backticks at compile-time, just as a subprocess in backticks would be expanded with its result at run-time. Use qx() for subprocesses instead.

    macro infix:<name> () => macro 'name' :infix ()

Use perl5 attributes as perl6 operator overloaders to tell macros if they have non-standard position: infix, prefix, postfix, circumfix, postcircumfix.

Don't support the quasi attributes :ast/:lang/:unquote, other than :COMPILING. Support the COMPILING:: pseudo-package.

See the feature/gh261-macros branch.

Editor integration

With type inference we can provide a much nicer development environment which also supports the debugger. plsense for emacs should give type feedback. I'm thinking of a port of ZeroBrane Studio for perl. This means providing core support for the needed serializers and introspection facilities.


Jit

Add a jit for the easiest platforms, and provide dynamic javascript-engine-style optimizations when stacks need to be replaced (on-stack replacement). Jits are a bit overrated: a fast bytecode loop can easily beat a jit and an optimizing compiler, see luajit2.

A tracing jit, not a method jit, sounds best, together with the inliner, native op sequences and loop optimizations. Currently we are trying a simple llvm-based method jit, which kicks in after counting methods and loops (similar to unladen-swallow and pyston, just smaller and faster); it inlines some pp runtime functions and thereby eliminates the function intro, parameter and return overhead, and can then apply the usual optimizations to the bodies without the call overhead.

Optimize the vm, the runloop

We carry around way too much bloat in the ops and the data which is not needed at run-time. E.g. the compiler throws away the nested symbol table stashes if not needed, which frees 20% memory. All the op pointers are not needed at run-time either. Think of a lua/p2-like redesign with tagged values and slimmer ops, and eventually put the stack onto the CPU stack.

Note that p5p argues the opposite way. They want to add even more run-time branches to the ops, without any justification.

Optimize special arithmetic op sequences to use unboxed integers and strings on the stack. We experiment with allowing unboxed values on the stack, because the stack is not garbage collected and not refcounted. We just need to be sure to box them before entering a non-collaborating sub, and when leaving a block with possible exceptions and stack cleanup. Those unboxed values are internally typed as :int and :str. Note that the coretypes :int and :str are not guaranteed to be unboxed, only if the compiler sees fit. In most cases those values are boxed, but without a class pointer and magic attached. (Done in the native branch)

Maybe rewrite to a better register-based compiler with fixed-length 2-operand ops as in p2, but this might be too tricky for XS, mapping the global stack to the local stack. Probably no SSA (three arguments), just a cpu-friendly two-argument form as in p2/lua 5.1.

Currently there is a branch with linearized ops, called OPL, without any op pointers, just indices into an op array per sub. Similar to python, just with common-sized ops, not 1-3 word ops as in python. op_next is always the next op in the array, so it is cached. op_other are skips.

Allow faster XS calls, user-provided function calls and method calls. Provide support for named arguments in the vm, fast, not via hashes. Many of the current io+sys ops are better implemented as library methods. With ~50 ops instead of >300 the runloop might fit into the L1 cache again. Separate calling of fixed-arity methods from varargs. Detect and use tailcalls automatically. Do not step into a separate runloop for every single function call, only for coros, which do need to record the stack information.

Run-time optimize the data: no 2x indirection to access hash and array structs. Provide forwarding pointers to single tuples to hold all. This could also provide the possibility of a GC, if a second sweep for timely destruction is doable.

Optimize HEK for faster hash comparison, and use it for native str types, which might benefit from its immutability and the pre-calculated hash, len and utf8 fields.

Check readonly support for PL_strtab for the compiler. Builtin readonly perfect hash plus a dynamic part, as in the warnings XS.

Better symbol table

Check converting the GV stash tree of hashes into a single global data structure, not a nested hash of hashes: a hash, AVL tree, trie (TST or R² TST), Patricia trie or DAFSA (deterministic acyclic finite state automaton), for faster dynamic variable and function name lookup. No binary names, all as UTF8. Maybe restrict to ASCII or valid identifiers to limit the trie memory (arrays of 26 vs 256). Stashes then point to trie nodes and need a HV check. Optionally provide partial read-only support for the compiler, as for PL_strtab. See the branch feature/gh127-gvflat.
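
A minimal trie-based symbol lookup, sketched in Python with a dict-of-dicts (purely illustrative; a real implementation would use the array- or TST-based nodes the text suggests):

```python
def trie_insert(root, name, value):
    node = root
    for ch in name:
        node = node.setdefault(ch, {})   # one node per character
    node[""] = value                     # "" marks end-of-name, holds the payload

def trie_lookup(root, name):
    node = root
    for ch in name:
        node = node.get(ch)
        if node is None:
            return None
    return node.get("")

symtab = {}
trie_insert(symtab, "main::foo", 1)
trie_insert(symtab, "main::foobar", 2)   # shares the "main::foo" prefix path
assert trie_lookup(symtab, "main::foo") == 1
assert trie_lookup(symtab, "main::foobar") == 2
assert trie_lookup(symtab, "main::fo") is None
```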

Coro support

Keep native threads as-is (this is not fixable; better to remove it), but actively help coroutine and async IO support.

Untangle the IO layer mess

A stack is a stack.

Bring back proper `match` and `given/when`

With type support it would even be efficient and would help the inferencer. match needs to be structural; p5p smartmatch can stay dumb as it is now.


Possibly add a clp library, a constraint logic solver, with bindings to external sat solvers like minisat, which could be included due to its small size and license. Invocation is detected by checking for lvalue function calls in assignments, when the function is not declared as :lvalue.

    use clp;
    sub fact(int $i=0) :int { assert $i>=0; return $i ? $i * fact($i-1) : 1 }
    say fact(7);     # => 5040
    fact($_) = 5040; # solve it!
    say $_;          # => 7

The type optimizer with advanced types might eventually benefit from its native performance.
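
The fact example above can be approximated in Python with a naive solver, using brute-force search for the preimage instead of a real constraint solver (purely illustrative):

```python
def fact(i):
    assert i >= 0
    return i * fact(i - 1) if i else 1

def solve_fact(target, limit=50):
    # "fact($_) = 5040; # solve it!" -- search for an i with fact(i) == target
    return next((i for i in range(limit) if fact(i) == target), None)

assert fact(7) == 5040
assert solve_fact(5040) == 7
```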

Development policies

We favor community-friendly, democratic development policies, as e.g. in perl6, over the usual old-style dictatorial model. That means the powerful (those with management and commit roles) are not allowed to abuse their powers, while the powerless users are allowed to, and need the ability to, criticise them and their code.

In the old trust-based dictatorial model, as e.g. in linux or perl5, the powerful call the powerless abusive names ("asshole" or "jerk" are very common, or "trolls"), and are allowed to avoid discussions of features or problems by committing directly to master, rejecting tickets or otherwise selectively abusing their powers. This is forbidden in cperl.

We track stable upstream releases for our releases. Of each major perl5 release we eventually merge most of the p5p commits into cperl, but we make it publicly clear beforehand, in the merge window, which commits will be rejected and why. Everything is done in public branches, so you can raise your objections or improvements beforehand. Merge and reject follow the same development process as cperl bugfixes, features and security.

Features are developed in sprint-like 1-2 week cycles. If a feature doesn't work out in this period, we switch to the next branch or sometimes continue on it. Several feature branches are already done, but not yet merged, because we are waiting for other decisions to be made.

With classes, types, compilable, company friendly


classes

A true perl6-like and efficient object system is in the works, and a lot of support has been added already: core and user types, handling of restricted stashes, parsing of the new syntax, efficient fields and compile-time role composition.

One cannot just add classes and objects to core without a type system first.

It supports the perl6-like syntax for class, method, has, roles (mixins, compile-time composable classes), is, does, and finalized classes (compile-time optimizations). Later: method modifiers (:before, :after, :around), multiple type dispatch (multi, no need to overload methods) and an easier perl6-like syntax, which closely resembles early perl6 designs, Damian Conway's perl5i and Moose, without their massive overhead.

See perlclass


types

Types are optional and make code safer, faster and better documented. cperl includes builtin coretypes, native ffi types, user-defined types and type dispatch, checks, subtype relationships, inference and optimizations on user-defined functions and methods, for XS and PP. Later: native aggregate types for arrays, hashes and native classes (structs).

See perltypes


compilable

perl5 proper started being compiler-unfriendly with 5.16, with changed security and then COW handling with 5.18. It can still be compiled, but it is not recommended, as it yields only about 5% memory savings. B::C can still strip the nested stashes (namespaces), but needs to keep all COW strings dynamic. There are possible workarounds, such as storing all simple strings as hash keys or making them immutable, but this is not a viable solution.

With cperl we can again compile to more efficient code, with >35% memory savings.

See B::C

company friendly

The optional types lead to better documentation, earlier compile-time detection of type violations, fewer needed tests and more performance. Managers love this.

cperl uses a professional development process, different from the old established dictatorial process of power abuse, the right to commit to master without discussion, and developer burnout.

It is also better compilable.

How to detect cperl?

"$^V" =~ /c$/
config.h defines USE_CPERL, and $Config{usecperl} is set
cperl changed modules end with c, typically _01c.
Libraries are installed into /usr/local/lib/cperl, not /usr/local/lib/perl5.

Type links

Most dynamic languages are currently in the process of getting type support. This happened for perl5 at around 2002, but was never properly led (the developers had to leave p5p), was then destroyed with 5.10, and then actively blocked for decades. You cannot do an object system without types.

    microsoft's javascript with types
    facebook's javascript with types
    Soundscript, google's javascript with types
    planned python with types
    existing python with types
    ruby 3.0 planned with types
    a good existing ruby with types
    facebook's php with types
    php 7 types overview
    php 7
    php 7
    the old plan, ignored