perl5.git.perl.org Git - perl5.git/log
This is a live mirror of the Perl 5 development currently hosted at https://github.com/perl/perl5
perldelta for the Win32 symlink()/readlink()/stat() changes
Document PERL_TEST_HARNESS_ASAP
which can increase the CPU occupancy when running the test suite in
parallel on a many-core system, resulting in earlier completion.
add note on how to write NEXTKEY when you can't just wrap around each()
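A minimal sketch of the pattern such a note describes, with a made-up class name: when a tied hash cannot simply delegate to each(), FIRSTKEY can snapshot the keys and NEXTKEY can walk that snapshot.

    package Tie::KeySnapshot;                 # hypothetical illustration class
    sub TIEHASH  { my ($class, %h) = @_; bless { data => {%h} }, $class }
    sub FETCH    { $_[0]{data}{$_[1]} }
    sub STORE    { $_[0]{data}{$_[1]} = $_[2] }
    sub EXISTS   { exists $_[0]{data}{$_[1]} }
    sub FIRSTKEY {
        my $self = shift;
        $self->{keys} = [ sort keys %{ $self->{data} } ];   # take a snapshot
        $self->{idx}  = 0;
        return $self->{keys}[0];
    }
    sub NEXTKEY {                             # returning undef ends iteration
        my ($self, $lastkey) = @_;
        return $self->{keys}[ ++$self->{idx} ];
    }

    package main;
    tie my %h, 'Tie::KeySnapshot', one => 1, two => 2;
    print "$_\n" for keys %h;                 # iterates via FIRSTKEY/NEXTKEY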
Revert "op.h: Restrict to core certain internal symbols"
This reverts commit 1d6cadf136bf2c85058a5359fb48b09b3ea9fe6f.
Due to cpan breakage: GH #18374 #18375 #18376
add more win32 stat tests
These tickets were suggested as fixed by the stat updates; some
were fixed, but some weren't.
Add tests (TODO for the unfixed) to help track them
cpan/Encode: sync with CPAN version 3.08
t/harness: Add option for faster test suite execution
This commit adds an environment variable, PERL_TEST_HARNESS_ASAP, which
if set to non-zero increases the parallelism in the execution of the
test suite, speeding it up on systems with multiple cores.
Normally, there are two main test sections, one for core and the second
for non-core tests, and the testing of the non-core one doesn't begin
until the core tests are complete. Within each section, there are a
number of test categories, like 're' for regular expressions, and
'JSON::PP' for the pure perl implementation of JSON.
Within each category, there are various single .t test files. Some
categories can have those be tested in parallel; some require them to be
done in a particular order, say because an earlier .t does setup for
subsequent ones. We already have this capability.
Completion of all the tests in a category is not needed before those of
another category can be started. This is how it already works.
However, the core section categories are ordered so that they begin in a
logical order for someone trying to get perl to work. First to start
are the basic sanity tests, then by roughly decreasing order of
widespread use in perl programs in the wild, with the final two
categories, porting and perf, being mainly of use to perl5 porters.
These two categories aren't started until all the tests in the earlier
categories are started. We have some long running tests in those two
categories, and generally they delay the start of the entire second
section.
If those long running tests could be started sooner, shorter tests in
the first section could be run in parallel with them, increasing the
average CPU utilization, and the second section could begin (and hence
end) earlier, shortening the total elapsed execution time of the entire
suite.
The second section has some very long running tests. JSON-PP is one of
them. If it could run in parallel with tests from the first section,
that would also speed up the completion of the suite.
The environment variable added by this commit does both things. The
basic sanity test categories in the first section continue to be started
before anything else. But then all other tests are run in decreasing
order of elapsed time they take to run, removing the boundaries between
some categories, and between the two sections.
The gain from this increases as the number of jobs run in parallel does;
slower high core platforms have the highest increase. On the old
dromedary with 24 cores, the gain is 20%, almost 2 minutes. On my more
modern box with 12 cores, it is 8%.
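A hedged usage sketch for the variable described above, run from a built perl source tree; TEST_JOBS and the test_harness make target are the usual parallel-run knobs, and the job count is purely illustrative.

    $ENV{PERL_TEST_HARNESS_ASAP} = 1;   # enable the ASAP scheduling
    $ENV{TEST_JOBS} = 12;               # illustrative parallel job count
    system('make', 'test_harness') == 0
        or die "make test_harness failed: $?";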
TAP::Harness: Move timer initialization
This commit adds to blead the accepted PR
https://github.com/Perl-Toolchain-Gang/Test-Harness/pull/98
but the updated module has not been released.
This commit allows a many-core processor to run the Perl test suite more
efficiently.
Prior to this commit, the timers for counting elapsed time and CPU usage
were begun when a job's first output appears. This yields inaccurate
results. These results are saved in t/test_state for future runs so
that they can start the longest-running tests first, which leads to
using the available cores more efficiently. (If you start a long running
test after everything else is nearly done, you have to wait for it to
finish before the suite as a whole is; if you start the long ones first,
and the shortest last, you don't have to wait very long for any
stragglers to complete.) Inaccurate results here lead to this
situation, which we were often seeing in the podcheck.t test.
The worst case is if there is heavy computation at the beginning of the
test being run. podcheck, for example, examines all the pods in the
directory structure to find which links to other pods do or do not have
corresponding anchors. Output doesn't happen until the analysis is
complete. On my system, this takes over 30 seconds, but prior to this
commit, what was noted was just the time required to do the output,
about 200 milliseconds. The result was that podcheck was viewed as
being one of the shortest tests run, so was started late in the process,
and generally held up the completion of it.
This commit by itself doesn't improve the test completion very much,
because tests are run a whole directory at a time, and the
directory podcheck is in, for example, is run last. The next commit
addresses that.
fixup! Add Sevan Janiyan as author
Add Sevan Janiyan as author
Detect GCC as compiler to use
On Illumos-based distributions, GCC is likely to be the compiler available on the system.
Change tested on SmartOS
perlxs: Note that rpc.h can be in different places
This replaces PR #18247
fix the results of my stupidity
I added these definitions late in the process, thinking I hadn't
already added them, but I had.
Storable: t/canonical.t: avoid stderr noise
informational text should go to stdout, not stderr
POSIX: t/posix.t: avoid warning
Since warnings were enabled in this test file, skip one spurious warning
being generated. S_ISBLK() is being called purely to test run-time
loading; so it's being called without an arg, which now triggers an
'uninitialized value' warning.
Unicode-Normalize/Makefile.PL: avoid stderr
During build, output general progress information to stdout, not stderr.
append colon to USE_STRICT_BY_DEFAULT description
This stops autodoc.pl complaining that:
USE_STRICT_BY_DEFAULT has no documentation
ODBM_File.xs: silence -Wc++-compat warning
Under gcc -Wc++-compat, it warns that 'delete' is a keyword. Since this
is the name of the actual function in odbm, just temporarily disable
the warning.
Opcode.xs: fix compiler warning
In some debugging code it was doing a SAVEDESTRUCTOR()
to do a warn() on scope exit, but it should have used the nocontext
version of warn().
Implement symlink(), lstat() and readlink() on Win32
win32 symlink: reindent
win32 symlink: treats paths that look like directories as directories
Test-Harness: don't assume symlink succeeds
https://github.com/Perl-Toolchain-Gang/Test-Harness/pull/103
upstream which has been applied but not released.
t/op/taint.t: handle symlink requiring anything unavailable
like privileges, or a filesystem without symlink support
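A small sketch of the kind of runtime check such tests move to, assuming only core File::Temp (paths are illustrative): attempt a real symlink under the current privileges and filesystem rather than assuming symlink() will succeed.

    use File::Temp qw(tempdir);
    my $dir = tempdir(CLEANUP => 1);
    # May fail for lack of privileges or filesystem support even where
    # symlink() itself is implemented.
    my $symlinks_ok = symlink("$dir/target", "$dir/link") ? 1 : 0;
    print $symlinks_ok ? "symlinks usable\n" : "skipping symlink tests: $!\n";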
Win32: try to make the new stat pre-Vista compatible
Skips the win32\stat.t execute flag test for handles pre-Vista
This is intended mostly for allowing the Win2000 smoker to build and
test. If we end up dropping pre-Vista support this commit can be
removed (or reverted if it ends up in blead)
pre-vista support for win32_symlink
Win32: don't include version specific config for prebuilt config_h.*
This fixes the problem where doing a regen_config_h with a compiler
that supports stdbool.h would generate a config_h.* that would
result in a build failure on older compilers that didn't support
stdbool.h.
lstat(), readlink() and unlink() treat directory junctions as symlinks
remove ${^WIN32_SLOPPY_STAT}
The new implementation, like the UCRT implementation, always
opens the specified file.
win32 symlink: only use the unprivileged flag if windows is new enough
Win32: re-work FILETIME <=> time_t conversions
Current versions of Windows claim to support leap seconds, but the
time conversion I was using ignores that possibility.
Switch to using APIs (FileTimeToSystemTime() and SystemTimeToFileTime())
that are documented to support leap seconds that might be included
in a FILETIME.
File::Copy: support symlinks on Win32
File::Find: support Win32 symlinks
find.t, taint.t: check that symlink() works under the current
permissions/filesystem rather than assuming it will work
find.t: since symlinks are now available, an earlier test block
sets $FileFileTests_OK, but the tests in this Win32 block don't use
either of the follow options, one of which is required for fast file tests.
taint.t: ensure we get "/" separated names to match File::Find's output
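For context, a brief sketch of what "fast file tests" refers to in File::Find (the starting directory is illustrative): with one of the follow options enabled, each entry has already been stat()ed before wanted() runs, so the special "_" filehandle can reuse that result.

    use File::Find;
    find({ follow => 1,                       # or follow_fast => 1
           wanted => sub {
               # "_" reuses the stat File::Find has already performed
               print "$File::Find::name\n" if -f _;
           } },
         '.');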
File::Find find.t: switch to done_testing()
PathTools: use PerlLIO_*() functions and handle chdir()-on-a-symlink differences
Use PerlLIO_lstat() and PerlLIO_readlink() instead of directly calling
the POSIX names, so our Win32 overrides work.
For the test: unlike POSIX, changing directory via a symlink on Win32
appears to store the symlink as part of the current directory, so
GetCurrentDirectory() fetches that rather than the hard-linked path.
Win32: implement our own stat(), and hence our own utime
This fixes at least two problems:
- unlike UCRT, the MSVCRT used for gcc builds has a bug converting
a FILETIME whose DST state differs from the current one, returning
a time offset by an hour. Fixes GH #6080
- the MSVCRT apparently uses FindFirstFile() to fetch file
information, but that doesn't follow symlinks, so stat()
ends up returning information about the symlink, not the
underlying file. This isn't an issue with the UCRT, which
opens the file as this implementation does.
Currently this code calculates the time_t for st_*time, and does the
reverse conversion for utime(), using a simple multiplication and offset
between time_t and FILETIME values, but this may be incorrect
if leap seconds are enabled.
This code also requires Vista or later.
Some of this is based on code by Tomasz Konojacki (xenu).
Win32: implement symlink() and readlink()
The API used requires Windows Vista or later.
The API itself requires either elevated privileges or a sufficiently
recent version of Windows 10 running in "Developer Mode", so some
tests require updates.
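A hedged usage sketch of the newly implemented calls; the paths are illustrative, and per the commit messages this needs Vista or later plus either elevated privileges or Windows 10 "Developer Mode".

    symlink('C:\data\reports', 'C:\shortcuts\reports')
        or die "symlink: $!";
    print readlink('C:\shortcuts\reports'), "\n";   # C:\data\reports
    my @link_info = lstat 'C:\shortcuts\reports';   # the link, not its target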
Win32: add lstat(), fetch st_dev and st_ino and fetch st_nlink for fstat
We need lstat() for various modules to work well with symlinks,
and the same modules often want to check for matches on the device
and inode number.
The values we're using for st_ino match those that the Python and Rust
libraries use, and Go uses the same volume and file index values for
testing if two stat objects refer to the same file.
They aren't entirely unique, given ReFS uses 128-bit file ids, but
the API used to check for this (GetFileInformationByHandleEx() for
FileIdInfo) is only available on server operating systems, so I can't
directly test it anyway.
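A short sketch of the comparison these st_dev/st_ino values make possible on Win32 (file names are illustrative):

    my ($path_a, $path_b) = ('report.txt', 'link-to-report.txt');
    my @a = stat $path_a or die "stat $path_a: $!";
    my @b = stat $path_b or die "stat $path_b: $!";
    # Fields 0 and 1 are st_dev and st_ino.
    print "same file\n" if $a[0] == $b[0] && $a[1] == $b[1];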
Account for 'less' reserving an extra column
After decades of stability, the 'less' pager project decided to claim an
extra column for its own use when called with certain common options.
This commit changes some of the auto-generating tools to wrap one column
earlier to compensate, and changes podcheck to also whine on wide
verbatim text one column less. But it changes the podcheck database
to grandfather in all the many existing places that exceed that amount.
That means only changes made to pods after this commit will be held to
the stricter value.
Of course, what this means is those pods will wrap or truncate in these
places on an 80 column window, making them harder to read, when used
with 'less' and when it is called with the options that reserve those
two columns. Patches welcome.
I haven't seen the wrapping problem with perldoc, and haven't
investigated much.
Document various CopFILEfoo functions
opcode.h: Restrict scope of internal variables to core
Document SvSHARED_HASH
bump version of ExtUtils::ParseXS
restore compatibility with old versions of ExtUtils::ParseXS
ExtUtils::ParseXS used to include a function called "errors", which was
documented. It was renamed to report_error_count in version 3.01 (perl
5.15.1) although the documentation wasn't fixed until 3.21 (perl 5.19.2).
As a documented function, this is a backwards compatibility issue.
It is possible for this to lead to errors when installing modules from
CPAN. If you are using the version of ExtUtils::ParseXS that comes with
core, between running the Makefile.PL and make, fulfilling prereqs can
result in upgrading ExtUtils::ParseXS. When Makefile.PL is run, the
generated Makefile gets the full path to xsubpp saved in it. Then when
upgraded from CPAN, ExtUtils::ParseXS and xsubpp will be in a new
location (site_perl or a local::lib). Running make will run the old
xsubpp, but it will then try to use the new ExtUtils::ParseXS which has
broken compatibility.
Restore the errors function as a compatibility shim to fix this.
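A minimal sketch of what such a compatibility shim could look like (not necessarily the exact code used): the old documented name simply forwards to the renamed method.

    package ExtUtils::ParseXS;
    # Keep the pre-3.01 documented API working by delegating to the new name.
    sub errors { goto &report_error_count }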
cop.h: Extend core-only portion
This encloses some #defines in a PERL_CORE section, as their only use is
in the macro immediately following, already confined to core.
INSTALL: Fix grammar/typos
perlapi: Consolidate svREFCNT_dec-ish entries
DynaLoader: use PerlEnv_getenv()
Doing so invokes thread-safe guards
op.h: Restrict to core certain internal symbols
so that they aren't accessible to XS code and won't be picked up by
autodoc
perlapi: Consolidate SvPVX-ish entries
Add -negative import args for 'use warnings'
perlapi: Consolidate SvREFCNT_INC-ish entries
add extra language in the quotemeta() docs for embedded \ and $
One paragraph was lifted from perlop.pod, and the other from perlre.pod.
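A brief illustration of the behaviour those paragraphs describe (the variable and strings are made up): inside \Q...\E the variable is interpolated first, and only then are the resulting metacharacters escaped.

    my $pattern = 'a.b(c)';
    print quotemeta($pattern), "\n";                   # a\.b\(c\)
    print "matched\n" if 'a.b(c)' =~ /\A\Q$pattern\E\z/;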
perlvar - clarify that paragraph mode also discards a single leading newline
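A small sketch of the behaviour being clarified (the input text is made up): in paragraph mode a single leading newline does not produce an extra empty record.

    my $text = "\nfirst paragraph\nstill the first\n\nsecond paragraph\n";
    open my $fh, '<', \$text or die $!;
    local $/ = "";                  # paragraph mode
    my @paras = <$fh>;
    print scalar @paras, "\n";      # 2: the leading newline is discarded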
Avoid deadlock with PERL_MEM_LOG
This fixes GH #18341
The Perl wrapper for getenv() was changed in 5.32 to allocate memory in
which to safely squirrel away the result of the wrapped getenv() call. It does
this while in a critical section so as to make sure another thread can't
interrupt it and destroy it.
Unfortunately, when Perl is compiled for debugging memory problems and
has PERL_MEM_LOG enabled, that allocation causes a recursive call to
getenv() for the purpose of checking an environment variable to see how
to log that allocation. And hence it deadlocks trying to enter the
critical section.
There are various solutions. One is to use or emulate a general semaphore
instead of a binary one. This is effectively what
PL_lc_numeric_mutex_depth does for another mutex, and the code for that
could be used as a template.
But given that this is an extreme edge case, requiring Perl to be
specially compiled to enable a feature used only for debugging, a much
simpler solution (if less safe were it ever used in production) should
suffice. Tony Cook suggested just avoiding the wrapper for this
particular purpose.
Add mutex locking for many-reader/1-writer
The mutex macros already in perl are sufficient to allow us to emulate
this type of locking, which may also be available natively, but I don't
think it is worth the effort to use the native calls.
locale.c: Move comment to better place
perlsub - indicate version requirement for "delete local"
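For reference, a tiny sketch of the construct whose minimum version is being documented; per perlsub, "delete local" requires Perl 5.12 or later.

    use v5.12;
    our %config = (debug => 1, verbose => 2);
    {
        delete local $config{debug};        # deleted only inside this block
        print exists $config{debug} ? "present\n" : "absent\n";   # absent
    }
    print "$config{debug}\n";               # 1 again after the block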
perlapi: PL_sv_yes and kin are read-only
perlapi: Remove per-thread section; move to real scns
Instead of having a grab bag section of all interpreter variables, move
their documentation to the section that they actually fit under.
perlapi: Move PL_dowarn to Warnings section
perldelta updates for the SysV IPC changes
Various updates and fixes to some of the SysV IPC ops and their tests
io/shm.t: make runnable as ./perl io/shm.t
and give editors a hint
shmwrite: treat the string as bytes
msgrcv: properly downgrade the receive buffer
If the receive buffer started with SVf_UTF8 on, the received message
SV would stay flagged, corrupting the result.
msgsnd: handle an upgraded MSG parameter correctly
perlfunc/msgsnd: the supplied MSG doesn't have a length field
The length of the message is derived from the length of the MSG
less the size of the type field.
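A short sketch of the MSG layout this describes, assuming a System V message queue is available (no cleanup is done here): the buffer is the packed native long type followed by the payload, so the kernel-visible length is length(MSG) minus the size of the packed type.

    use IPC::SysV qw(IPC_PRIVATE S_IRUSR S_IWUSR);
    my $id  = msgget(IPC_PRIVATE, S_IRUSR | S_IWUSR) // die "msgget: $!";
    my $msg = pack("l! a*", 1, "hello");    # type 1, then the payload bytes
    msgsnd($id, $msg, 0) or die "msgsnd: $!";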
fix UTF-8 handling for semop()
As with semctl(), the UTF-8 flag on the passed in opstring was ignored,
which meant that the upgraded version of the same string would
cause an error.
Just use SvPVbyte().
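Similarly, a sketch of a packed semop() opstring, assuming a System V semaphore set can be created (no cleanup is done here); after the fix an upgraded copy of the same opstring behaves identically, because only the bytes matter.

    use IPC::SysV qw(IPC_PRIVATE S_IRUSR S_IWUSR);
    my $semid = semget(IPC_PRIVATE, 1, S_IRUSR | S_IWUSR) // die "semget: $!";
    my $op    = pack("s!3", 0, 1, 0);   # semnum 0, increment by 1, no flags
    semop($semid, $op) or die "semop: $!";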
io/sem.t: eliminate warnings
This eliminates some warnings that semctl() (or other *ctl()) calls
might generate, and some warnings specific to io/sem.t:
- for IPC_STAT and GETALL, the current value of ARG is overwritten,
making an undefined value warning for it nonsensical, so don't
use SvPV_force().
- for other calls, ARG is either ignored or, in a behaviour
introduced in perl 3 (along with the ops), the supplied value is
treated as an integer which is then converted to a pointer. Rather
than warning on an undef value, which is most likely to be ignored,
we treat the undef as zero without the usual warning.
- always pass a number for SEMNUM in the test code
I didn't try to eliminate warnings for non-numeric/undefined SEMNUM,
since while we know it isn't used by SETALL, GETALL, IPC_STAT and
IPC_SET, it may or may not be used by system defined *ctl() operators
such as SEM_INFO and SHM_LOCK on Linux.
fixes #17926
*ctl: test that we throw on a code point above 0xff
These functions expect a packed structure of some sort, representing
the bytes of the structure in memory.
*ctl: test we handle the buffer as bytes
Previously this had the "unicode bug", an upgraded string would
be treated as the encoding of that string, rather than the raw
bytes.
*ctl: ensure the ARG parameter's UTF-8 flag is reset
If the SV supplied as ARG had the SVf_UTF8 flag on it would be left
on, which would effectively corrupt the returned buffer.
Only tested with shmctl(), since the other *ctl() functions only have
more complex structures with indeterminate types that would require
more effort to test.
perl - update usage data to match perlrun
John Karr is now a perl author
fix typo in comp/parser.t
Three similar tests eval a sub with a list of variables; $r is repeated at
the end of the list, but the errors being checked have nothing to do with
the repeated variable. The repetition causes a warning when warnings are
enabled.
comp/parser.t: count two lines that were being tested to see if they crashed
the parser as tests (they PASS if the test file is still running after those
lines).
bump $Carp::VERSION
fix context of caller call in Carp
Carp's CARP_NOT variable is meant to have package names. caller in list
context returns the calling file and line in addition to the package
name.
Enforce scalar context on the call to caller to fix this.
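A small sketch of the context difference behind the fix (sub and file names are made up): in list context caller() also returns the file and line, while scalar context yields just the package name that code expecting package names wants.

    sub who_called {
        my @frame = caller(0);          # e.g. ('main', 'demo.pl', 9, ...)
        my $pkg   = scalar caller(0);   # just 'main'
        return $pkg;
    }
    print who_called(), "\n";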
perlapi: Document UVf, as deprecated
perlapi: Note proper replacement for pad_compname_type
add a brief introduction to the IO SV type
Confine scope of SV_CONST to core
as well as the constants it uses. This is unused in cpan
Add a usage note about the "l" modifier.
win32: remove support for disabling USE_LARGE_FILES
It was enabled by default on all compilers. I don't think it ever
makes sense to disable it.
Restrict scope/Shorten some very long macro names
The names were intended to force people to not use them outside their
intended scopes. But by restricting those scopes in the first place, we
don't need such unwieldy names.
embed.fnc: Mark reginitcolors as Core only
This is used for internal initialization, and there are no uses on cpan
perlapi: Consolidate Sv[INU]VX-ish entries
Update gitignore files to reflect files in repo
fix splittree.pl ignore to only apply to root
There is a real splittree.pl in NetWare/, which may be copied to the
root. Ignore the file in the root, but not the file in NetWare/.
remove ignore for perlvms.pod, which is a real file now
move ignore for re into its own dists gitignore
move ignore for XS-APItest into dists own gitignore
remove ignore for Test-Harness directory which no longer exists
remove ignore for dl_win32.xs, since it is a real file now
add gitignore exclusions for files in git
There are a number of files excluded using gitignore rules that are
included in the repository. This can lead to confusion if something
other than git tries to read the ignore files.
Add rules to the gitignore files so that these files won't be ignored.
sv.h: Add comments
regcharclass.h: Simplify some expressions
The regen script was improperly collapsing two-element ranges into two
separate elements, which caused extraneous code to be generated.
Silence compiler warnings for NV, [IU]V compare
These were occurring on FreeBSD smokes.
warning: implicit conversion from 'IV' (aka 'long') to 'double' changes value from 9223372036854775807 to 9223372036854775808 [-Wimplicit-int-float-conversion]
9223372036854775807 is IV_MAX. What needed to be done here was to use
the NV containing IV_MAX+1, a value that already exists in perl.h.
In other instances, simply casting to an NV before doing the comparison
with the NV was what was needed.
This fixes #18328
perlapi: Consolidate sv_vsetpvf-ish entries