PHP: Hypertext Preprocessor - PHP 7.3.14 Released (23.1.2020, 00:00 UTC)
The PHP development team announces the immediate availability of PHP 7.3.14. This is a security release which also contains several bug fixes. All PHP 7.3 users are encouraged to upgrade to this version. For source downloads of PHP 7.3.14 please visit our downloads page; Windows source and binaries can be found on windows.php.net/download/. The list of changes is recorded in the ChangeLog.
Link
PHP: Hypertext Preprocessor - PHP 7.4.2 Released (23.1.2020, 00:00 UTC)
The PHP development team announces the immediate availability of PHP 7.4.2. This is a security release which also contains several bug fixes. All PHP 7.4 users are encouraged to upgrade to this version. For source downloads of PHP 7.4.2 please visit our downloads page; Windows source and binaries can be found on windows.php.net/download/. The list of changes is recorded in the ChangeLog.
Link
PHP: Hypertext Preprocessor - PHP 7.2.27 Released (23.1.2020, 00:00 UTC)
The PHP development team announces the immediate availability of PHP 7.2.27. This is a security release. All PHP 7.2 users are encouraged to upgrade to this version. For source downloads of PHP 7.2.27 please visit our downloads page; Windows source and binaries can be found on windows.php.net/download/. The list of changes is recorded in the ChangeLog.
Link
Voices of the ElePHPant - What Conference Organizers Wish Speakers Knew (22.1.2020, 19:06 UTC)
Link
Voices of the ElePHPant - Interview with John P Bloch (21.1.2020, 12:30 UTC)
Link
Voices of the ElePHPant - What Conference Organizers Wish Speakers Knew (14.1.2020, 16:33 UTC)
Link
SitePoint PHP - 4 Reasons to Use Image Processing to Optimize Website Media (10.1.2020, 17:00 UTC)
4 Reasons to Use Image Processing to Optimize Website Media

This sponsored article was created by our content partner, BAW Media. Thank you for supporting the partners who make SitePoint possible.

Image optimization is a big deal when it comes to website performance. You might be wondering if you’re covering all the bases by simply keeping file size in check. In fact, there’s a lot to consider if you truly want to optimize your site’s images.

Fortunately, there are image processing tools and content delivery networks (CDNs) available that can handle all the complexities of image optimization. Ultimately, these services can save you time and resources, while also covering more than one aspect of optimization.

In this article, we’ll take a look at the impact image optimization can have on site performance. We’ll also go over some standard approaches to the problem, and explore some more advanced image processing options. Let’s get started!

Why Skimping on Image Optimization Can Be a Performance Killer

If you decide not to optimize your images, you’re essentially tying a very heavy weight to all of your media elements. All that extra weight can drag your site down a lot. Fortunately, optimizing your images trims away the unnecessary data your images might be carrying around.

If you’re not sure how your website is currently performing, you can use an online tool to get an overview.

[Image: results of a website speed test]

Once you have a better picture of what elements on your website are lagging or dragging you down, there are a number of ways you can tackle image optimization specifically, including:

  • Choosing appropriate image formats. There are a number of image formats to choose from, and they each have their strengths and weaknesses. In general, it’s best to stick with JPEGs for photographic images. For graphic design elements, on the other hand, PNGs are typically superior to GIFs. Additionally, new image formats such as Google’s WebP have promising applications, which we’ll discuss in more detail later on.
  • Choosing the right compression type. When it comes to compression, the goal is to get each image to its smallest “weight” without losing too much quality. There are two kinds of compression that can do that: “lossy” and “lossless”. A lossy image will look similar to the original, but with some decrease in quality, whereas a lossless image is identical to the original but remains heavier (see the sketch after this list for how these settings are typically applied).
  • Designing with the image size in mind. If you’re working with images that need to display in a variety of sizes, it’s best to provide all the sizes you’ll need. If your site has to resize them on the fly, that can negatively impact speeds.
  • Exploring delivery networks. CDNs can be an alternative to more resource-heavy approaches for managing media files. A CDN can handle all of your image content, and respond to a variety of situations to deliver the best and most optimized files.
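
To make these approaches concrete, here is a minimal sketch of what local image processing can look like in PHP with the Imagick extension. This is an illustration only, not something from the original article, and the file names, target width and quality setting are assumptions chosen for the example.

// Minimal sketch: resize, compress, strip metadata and emit a WebP variant.
// File names and settings below are illustrative assumptions.
$image = new Imagick('photo-original.jpg');

// Resize ahead of time instead of letting the browser scale the image down.
$image->thumbnailImage(1200, 0); // a height of 0 keeps the aspect ratio

// Lossy compression: trade a little quality for a much smaller file.
$image->setImageCompressionQuality(80);

// Strip EXIF and other metadata the visitor never sees.
$image->stripImage();

$image->writeImage('photo-1200.jpg');

// Also emit a WebP variant for browsers that support it.
$image->setImageFormat('webp');
$image->writeImage('photo-1200.webp');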

As with any technical solution, you’ll have to weigh the pros and cons of each approach. However, it’s also worth noting that these more traditional approaches aren’t the only options you have available to you.

4 Reasons to Use Image Processing for Optimizing Your Website’s Media

As we

Truncated by Planet PHP, read more at the original (another 2853 bytes)

Link
Derick Rethans - Xdebug Update: December 2019 (7.1.2020, 09:52 UTC)

Xdebug Update: December 2019

Another month, another monthly update where I explain what happened with Xdebug development in this past month. It will be published on the first Tuesday after the 5th of each month. Patreon supporters will get it earlier, on the first of each month. You can become a patron here to support my work on Xdebug. If you are leading a team or company, then it is also possible to support Xdebug through a subscription.

In December, I worked on Xdebug for nearly 50 hours, on the following things:

Xdebug 2.9.0

After releasing Xdebug 2.8.1 at the start of the month (which I mentioned in last month's update), more users noticed that although I had improved code coverage speed compared to Xdebug 2.8.0, it was still annoyingly slow. Nikita Popov, one of the PHP developers, provided me with a new idea on how to approach trying to find out which classes and functions still had to be analysed. He mentioned that classes and functions are always added to the end of the class/function tables, and that they are never removed either. This resulted in a patch, where the algorithm to find out whether a class/function still needs to be analysed went from O(n²) to approximately O(n). You can read more about this in the article that I wrote about it. A few other issues were addressed in Xdebug 2.9.0 as well.
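
To illustrate Nikita's idea, here is a conceptual sketch in PHP (the real implementation is C code inside Xdebug, and the names below are made up for the example): because entries are only ever appended to the tables and never removed, it is enough to remember how many entries were already analysed and to start scanning from there.

// Conceptual sketch: only look at table entries added since the last pass.
class CoverageAnalyser
{
    /** @var int The number of entries that have already been analysed. */
    private $analysedCount = 0;

    public function analyseNewEntries(array $functionTable): void
    {
        $total = count($functionTable);

        // Each entry is visited exactly once across all calls, which is
        // roughly O(n) overall, instead of rescanning the whole table on
        // every pass (O(n²)).
        for ($i = $this->analysedCount; $i < $total; $i++) {
            $this->analyse($functionTable[$i]);
        }

        $this->analysedCount = $total;
    }

    private function analyse($entry): void
    {
        // ... record which lines of $entry are executable ...
    }
}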

Breakpoint Resolving

In the May update I wrote about resolving breakpoints. This feature tries to make sure that whenever you set a breakpoint, Xdebug actually breaks on it. However, there are currently two issues with this: 1. breaks happen more often than expected, and 2. the algorithm to find lines is really slow. I am addressing both of these problems by using a trick similar to the one Nikita suggested for speeding up code coverage analysis. This work requires quite a bit of rewriting of the breakpoint resolving code, and hence it is ongoing. I expect this to culminate in an Xdebug 2.9.1 release during January.

debugclient and DBGp Proxy

I have wanted to learn Go for a while, and in order to get my feet wet I started reimplementing Xdebug's bundled debugclient in Go, and at the same time creating a library to handle the DBGp protocol.

The main reason why a rewrite is useful is that the debugclient bundled with Xdebug no longer seems to work with libedit. This makes using debugclient really annoying, as I can't simply use the up and down arrows to scroll through my command history. I primarily use the debugclient to test the DBGp protocol, without an IDE "in the way".

The reason to write a DBGp library is that there are several implementations of a DBGp proxy. It is unclear whether they actually implement the protocol, or just do something that "works". I will try to make the DBGp proxy that I will be working on stick to the protocol exactly, which might require changes to IDEs that implement it against a non-compliant one (Komodo's pydbgpproxy seems to be one of these).

This code is currently not yet open source, mostly because I am still finding my feet with Go. I expect to release parts of this on the way to Xdebug 3.0.

Business Supporter Scheme and Funding

Support through the Business Supporter Scheme continues to trickle in.

This month's new supporter is Stratege

Truncated by Planet PHP, read more at the original (another 913 bytes)

Link
Matthias Noback - Rules for working with dynamic arrays and custom collection classes (6.1.2020, 10:20 UTC)

Here are some rules I use for working with dynamic arrays. It's pretty much a Style Guide for Array Design, but it didn't feel right to add it to the Object Design Style Guide, because not every object-oriented language has dynamic arrays. The examples in this post are written in PHP, because PHP is pretty much Java (which might be familiar), but with dynamic arrays instead of built-in collection classes and interfaces.

Using arrays as lists

All elements should be of the same type

When using an array as a list (a collection of values with a particular order), every value should be of the same type:

$goodList = [
    'a',
    'b'
];

$badList = [
    'a',
    1
];

A generally accepted style for annotating the type of a list is: @var array<TypeOfElement>. Make sure not to add the type of the index (which would always be int).
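For example (a small illustration added here, reusing the list from above):

/** @var array<string> */
$goodList = [
    'a',
    'b'
];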

The index of each element should be ignored

PHP will automatically create new indexes for every element in the list (0, 1, 2, etc.). However, you shouldn't rely on those indexes, nor use them directly. The only properties of a list that clients should rely on are that it is iterable and countable.

So feel free to use foreach and count(), but don't use for to loop over the elements in a list:

// Good loop:
foreach ($list as $element) {
}

// Bad loop (exposes the index of each element):
foreach ($list as $index => $element) {
}

// Also bad loop (the index of each element should not be used):
for ($i = 0; $i < count($list); $i++) {
}

(In PHP, the for loop might not even work, because there may be indices missing in the list, and indices may be higher than the number of elements in the list.)
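To make that concrete (this example is not in the original post): removing an element with unset() leaves a gap in the indexes, which is exactly why the for loop above can fail.

$list = ['a', 'b', 'c'];
unset($list[1]);

// $list is now [0 => 'a', 2 => 'c']: count($list) is 2, but there is no
// index 1, so the for loop above would read a missing offset.
// array_values() re-indexes the list if contiguous keys are really needed:
$list = array_values($list); // [0 => 'a', 1 => 'c']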

Instead of removing elements, use a filter

You may want to remove elements from a list by their index (unset()), but instead of removing elements you should use array_filter() to create a new list, without the unwanted elements.

Again, you shouldn't rely on the index of elements, so when using array_filter() you shouldn't use the flag parameter to filter elements based on the index, or even based on both the element and the index.

// Good filter:
array_filter(
    $list,
    function (string $element): bool {
        return strlen($element) > 2;
    }
);

// Bad filter (uses the index to filter elements as well)
array_filter(
    $list,
    function (int $index): bool {
        return $index > 3;
    },
    ARRAY_FILTER_USE_KEY
);

// Bad filter (uses both the index and the element to filter elements)
array_filter(
    $list,
    function (string $element, int $index): bool {
        return $index > 3 || $element === 'Include';
    },
    ARRAY_FILTER_USE_BOTH
);

Using arrays as maps

When keys are relevant and they are not indices (0, 1, 2, etc.), feel free to use an array as a map (a collection from which you can retrieve values by their unique key).

All the keys should be of the same type

The first rule for using arrays as maps is that all the keys in the array should be of the same type (most commonly string-type keys).

$goodMap = [
    'foo' => 'bar',
    'bar' => 'baz'
];

// Bad (uses different types of keys)
$badMap = [
    'foo' => 'bar',
    1 => 'baz'
];

All the values should be of the same type

The same goes for the values in a map: they should be of the same type.

$goodMap = [
    'foo' => 'bar',
    'bar' => 'baz'
];

// Bad (uses different types of values)
$badMap = [
    'foo' => 'bar',
    'bar' => 1
];

A generally accepted style for annotating the type of a map is: @var array<TypeOfKey, TypeOfValue>.
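For example (again, a small illustration reusing the map from above):

/** @var array<string, string> */
$goodMap = [
    'foo' => 'bar',
    'bar' => 'baz'
];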

Maps should remain private

Lists can safely be passed around from object to object, because of their simple characteristics. Any client can use one to loop over its elements, or count its elements, even if the list is empty. Maps are more difficult to work with, because clients may rely on keys that have no corresponding value. This means that in general, they should remain private to the object that manages them. Instea

Truncated by Planet PHP, read more at the original (another 8885 bytes)

Link
Evert Pot - Performance testing HTTP/1.1 vs HTTP/2 vs HTTP/2 + Server Push for REST APIs (2.1.2020, 12:00 UTC)

When building web services, a common piece of wisdom is to try to reduce the number of HTTP requests to improve performance.

There are a variety of benefits to this, including fewer total bytes being sent, but the predominant reason is that traditionally browsers will only make 6 HTTP requests in parallel for a single domain. Before 2008, most browsers limited this to 2.

When this limit is reached, it means that browsers will have to wait until earlier requests are finished before starting new ones. One implication is that the higher the latency is, the longer it will take until all requests finish.

Take a look at an example of this behavior. In the following simulation we’re fetching a ‘main’ document. This could be the index of a website, or some JSON collection.

After getting the main document, the simulator grabs 99 linked items. These could be images, scripts, or other documents from an API.

The 6 connection limit has resulted in a variety of optimization techniques. Scripts are combined and compressed, and graphics are often combined into ‘sprite maps’.

The limit and ‘cost’ of a single HTTP connection have also had an effect on web services. Instead of creating small, specific API calls, designers of REST (and other HTTP-based) services are incentivized to pack many logical ‘entities’ into a single HTTP request/response.

For example, when an API client needs a list of ‘articles’ from an API, usually they will get this list from a single endpoint instead of fetching each article by its own URI.

The savings are massive. The following simulation is similar to the last, except now we’ve combined all entities in a single request.

If an API client needs a specific (large) set of entities from a server, in order to reduce HTTP requests, API developers will be compelled to either build more API endpoints, each giving a result tailored to the specific use-case of the client, or deploy systems that can take arbitrary queries and return all the matching entities.

The simplest form of this is perhaps a collection with many query parameters, and a much more complex version of this is GraphQL, which effectively uses HTTP as a pipe for its own request/response mechanism and allows for a wide range of arbitrary queries.

Drawbacks of compounding documents

There are a number of drawbacks to this. Systems that require compounding of entities typically need additional complexity on both server and client.

Instead of treating a single entity as some object that has a URI, which can be fetched with GET and subsequently cached, a new layer is required on both server and client-side that’s responsible for teasing these entities apart.

Re-implementing logic that HTTP already provides also has a nasty side-effect: other features from HTTP must be reimplemented as well. The most common example is caching.

On the REST-side of things, examples of compound-documents can be seen in virtually any standard. JSON:API, HAL and Atom all have this notion.

If you look at most full-featured JSON:API client implementations, you will usually see that these clients ship with some kind of ‘entity store’, allowing them to keep track of which entities they have received, effectively maintaining an equivalent of an HTTP cache.

Another issue is that with some of these systems it’s typically harder for clients to request just the data they need. Since entities are often combined into compound documents, it’s all-or-nothing, or it requires significant complexity on both client and server (see GraphQL).

A more lofty drawback is that API designers may have trended towards systems that are more opaque, and are no longer part of the web of information, due to a lack of the interconnectedness that linking affords.

HTTP/2 and HTTP/3

HTTP/2 is now widely available. In HTTP/2 the cost of HTTP requests is significantly lower. Whereas with HTTP/1.1 each parallel request needed its own TCP connection, with HTTP/2 a single connection is opened per domain. Many requests can flow through it in parallel, and potentially out of order.

Instead of delegating parallelism to compound documents, we can now actually rely on the protocol itself to handle this.
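
As an illustration of that point (a sketch that is not part of the original article; the URL and the number of requests are made up), PHP’s curl extension can multiplex many requests over a single HTTP/2 connection:

$mh = curl_multi_init();
// Ask curl to multiplex requests over one connection where possible.
curl_multi_setopt($mh, CURLMOPT_PIPELINING, CURLPIPE_MULTIPLEX);

$handles = [];
for ($i = 1; $i <= 20; $i++) {
    $ch = curl_init("https://api.example.org/articles/{$i}");
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_2_0);
    curl_multi_add_handle($mh, $ch);
    $handles[] = $ch;
}

// Run all transfers; with HTTP/2 they can share one connection per domain.
do {
    $status = curl_multi_exec($mh, $active);
    if ($active) {
        curl_multi_select($mh);
    }
} while ($active && $status === CURLM_OK);

foreach ($handles as $ch) {
    $body = curl_multi_getcontent($ch);
    // ... process $body ...
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);

The design choice here is simply to keep one resource per URI and let the protocol provide the parallelism, which is the point the article is building towards.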

Using many HTTP/2 req

Truncated by Planet PHP, read more at the original (another 333361 bytes)

Link