SitePoint PHP: Improving Performance Perception with Pingdom and GTmetrix (22.6.2018, 18:00 UTC)

This article is part of a series on building a sample application --- a multi-image gallery blog --- for performance benchmarking and optimizations. (View the repo here.)


In this article, we'll analyze our gallery application using the tools we explained in the previous guide, and we'll look at possible ways to further improve its performance.

As per the previous post, please set up Ngrok and pipe to the locally hosted app through it, or host the app on a demo server of your own. This static URL will enable us to test our app with external tools like GTmetrix and Pingdom Tools.

GTmetrix first test

We scanned our website with GTmetrix to see how we could improve it. The results, while not catastrophically bad, still leave room for improvement.

The first tab --- PageSpeed --- contains a list of recommendations by Google. The first item --- a warning about a consistent URL --- pertains to our application outputting the images randomly, so we'll skip it. The next thing we can do something about is browser caching.

Browser Caching


We see that there is a main.css file that needs its Expires headers set, and the images in the gallery need the same thing. Now, the first idea for these static files would be to set this in our Nginx configuration:

location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
    expires 14d;
}

We can simply put this inside our server block and leave it to Nginx, right?

Well, not really. This will take care of our static files, like CSS, but the /raw images we're being warned about aren't really that static, so this snippet in our Nginx configuration alone won't fix the issue. For our images, we have an actual controller that creates them on the fly, so it would be ideal if we could set our response headers right there, in the controller. (For some reason, these weren't being set properly by Glide.)

Maybe we could write our Nginx directive to include the raw resources as well, but we felt the controller approach was more future-proof. This is because we aren't sure what other content may end up with a raw suffix eventually --- maybe some videos, or even audio files.

So, we opened /src/ImageController.php in our image gallery app, and dropped these two lines inside of our serveImageAction(), just before the line return $response:

// cache for 2 weeks
$response->setSharedMaxAge(1209600);
// (optional) set a custom Cache-Control directive
$response->headers->addCacheControlDirective('must-revalidate', true);

This will modify our dynamic image responses by adding the proper Cache-Control and Expires headers.

Symfony has more comprehensive options for the caching of responses, as documented here.
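
For illustration, here is a minimal, hypothetical sketch of a few of those options on an HttpFoundation Response (the standalone setup and the values are ours, not the gallery app's):

<?php
// Illustrative sketch of Symfony's Response caching options; values are placeholders.
require __DIR__ . '/vendor/autoload.php';

use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\HttpFoundation\Response;

$request = Request::createFromGlobals();
$response = new Response('...page or image content...');

// Mark the response as publicly cacheable and set browser / shared-cache lifetimes.
$response->setPublic();
$response->setMaxAge(600);           // browsers: 10 minutes
$response->setSharedMaxAge(1209600); // shared caches (proxies, CDNs): 2 weeks

// Validation-based caching: ETag and Last-Modified.
$response->setEtag(md5($response->getContent()));
$response->setLastModified(new \DateTime('-1 hour'));

// Reply with 304 Not Modified if the client's cached copy is still fresh.
if ($response->isNotModified($request)) {
    $response->send();
    return;
}

$response->send();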

Having restarted Nginx, we re-tested our app in GTmetrix, and lo and behold:

[GTmetrix: browser caching results after the change]

The post Improving Performance Perception with Pingdom and GTmetrix appeared first on SitePoint.

Link
Nomad PHP: Build of a Modern ORM enabled Apigility API (22.6.2018, 04:05 UTC)

September - US
Presented By

Tom Anderson
September 20, 2018
20:00 CDT

The post Build of a Modern ORM enabled Apigility API appeared first on Nomad PHP.

Link
SitePoint PHP: MySQL Performance Boosting with Indexes and Explain (21.6.2018, 18:00 UTC)

Techniques to improve application performance can come from a lot of different places, but normally the first thing we look at --- the most common bottleneck --- is the database. Can it be improved? How can we measure and understand what needs to be, and what can be, improved?

One very simple yet very useful tool is query profiling. Enabling profiling is a simple way to get a more accurate time estimate of running a query. This is a two-step process. First, we have to enable profiling. Then, we call show profiles to actually get the query running time.

Let's imagine we have the following insert in our database (and let's assume User 1 and Gallery 1 are already created):

INSERT INTO `homestead`.`images` (`id`, `gallery_id`, `original_filename`, `filename`, `description`) VALUES
(1, 1, 'me.jpg', 'me.jpg', 'A photo of me walking down the street'),
(2, 1, 'dog.jpg', 'dog.jpg', 'A photo of my dog on the street'),
(3, 1, 'cat.jpg', 'cat.jpg', 'A photo of my cat walking down the street'),
(4, 1, 'purr.jpg', 'purr.jpg', 'A photo of my cat purring');

Obviously, this amount of data will not cause any trouble, but let's use it to do a simple profile. Let's consider the following query:

SELECT * FROM `homestead`.`images` AS i
WHERE i.description LIKE '%street%';

This query is a good example of one that can become problematic in the future if we get a lot of photo entries.

To get an accurate running time on this query, we would use the following SQL:

set profiling = 1;
SELECT * FROM `homestead`.`images` AS i
WHERE i.description LIKE '%street%';
show profiles;

The result would look like the following:

Query_Id Duration Query
1 0.00016950 SHOW WARNINGS
2 0.00039200 SELECT * FROM homestead.images AS i \nWHERE i.description LIKE \'%street%\'\nLIMIT 0, 1000
3 0.00037600 SHOW KEYS FROM homestead.images
4 0.00034625 SHOW DATABASES LIKE \'homestead\'
5 0.00027600 SHOW TABLES FROM homestead LIKE \'images\'
6 0.00024950 SELECT * FROM homestead.images WHERE 0=1
7 0.00104300 SHOW FULL COLUMNS FROM homestead.images LIKE \'id\'

As we can see, the show profiles; command gives us times not only for the original query but also for all the other queries that are made. This way we can accurately profile our queries.

But how can we actually improve them?

We can either rely on our knowledge of SQL and improvise, or we can rely on the MySQL explain command and improve our query performance based on actual information.

Explain is used to obtain a query execution plan, or how MySQL will execute our query. It works with SELECT, DELETE, INSERT, REPLACE, and UPDATE statements, and it displays information from the optimizer about the statement execution plan. The official documentation does a pretty good job of describing how explain can help us:

With the help of EXPLAIN, you can see where you should add indexes to tables so that the statement executes faster by using indexes to find rows. You can also use EXPLAIN to check whether the optimizer joins the tables in an optimal order.
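
As an illustration, here is a sketch of ours (not from the article) of how the plan for the query above could be inspected from PHP; the host and credentials are assumed Homestead defaults and may differ in your setup:

<?php
// Hypothetical sketch: run EXPLAIN on the slow query via PDO and dump the plan.
// Connection details are assumed Homestead defaults; adjust for your environment.
$pdo = new PDO('mysql:host=127.0.0.1;dbname=homestead', 'homestead', 'secret');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$plan = $pdo->query(
    "EXPLAIN SELECT * FROM images AS i WHERE i.description LIKE '%street%'"
)->fetchAll(PDO::FETCH_ASSOC);

// Columns such as `type`, `possible_keys`, `key` and `rows` show whether an index
// is used; `type` = ALL means a full table scan, a hint that an index may help.
print_r($plan);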

The post MySQL Performance Boosting with Indexes and Explain appeared first on SitePoint.

Link
PHP: Hypertext Preprocessor - PHP 7.3.0 alpha 2 Released (21.6.2018, 00:00 UTC)
The PHP team is glad to announce the release of the second PHP 7.3.0 version, PHP 7.3.0 Alpha 2. The rough outline of the PHP 7.3 release cycle is specified in the PHP Wiki.

For source downloads of PHP 7.3.0 Alpha 2 please visit the download page. Windows sources and binaries can be found on windows.php.net/qa/.

Please carefully test this version and report any issues found in the bug reporting system. THIS IS A DEVELOPMENT PREVIEW - DO NOT USE IT IN PRODUCTION!

For more information on the new features and other changes, you can read the NEWS file, or the UPGRADING file for a complete list of upgrading notes. These files can also be found in the release archive.

The next release would be Alpha 3, planned for July 5. The signatures for the release can be found in the manifest or on the QA site. Thank you for helping us make PHP better.
Link
PHP: Hypertext Preprocessor - PHP 7.2.7 Released (21.6.2018, 00:00 UTC)
The PHP development team announces the immediate availability of PHP 7.2.7. This is primarily a bugfix release, which includes a segfault fix for opcache. PHP 7.2 users are encouraged to upgrade to this version.

For source downloads of PHP 7.2.7 please visit our downloads page. Windows source and binaries can be found on windows.php.net/download/. The list of changes is recorded in the ChangeLog.
Link
SitePoint PHP: PHP-level Performance Optimization with Blackfire (20.6.2018, 18:00 UTC)

Throughout the past few months, we've introduced Blackfire and ways in which it can be used to detect application performance bottlenecks. In this post, we'll apply it to our freshly started project to try and find the low-points and low-hanging fruit which we can pick to improve our app's performance.

If you're using Homestead Improved (and you should be), Blackfire is already installed. Blackfire should only ever be installed in development, not in production, so it's fine to only have it there.

Note: Blackfire can be installed in production, as it doesn't really trigger for users unless they manually initiate it with the installed Blackfire extension. However, defining profile triggers on certain actions or users (which don't require the extension) will incur a performance penalty for the end user. When Blackfire-testing live, make the test sessions short and effective, and avoid doing so under heavy load.

While it's useful to be introduced to Blackfire before diving into this, applying the steps in this post won't require any prior knowledge; we'll start from zero.

Setup

The following are useful terms when evaluating graphs produced by Blackfire.

  • Reference Profile: We usually run our first profile as a reference profile. This profile becomes the performance baseline of our application; we can compare any later profile with it to measure performance improvements.

  • Exclusive Time: The time spent executing a function/method itself, excluding the time spent in its external calls.

  • Inclusive Time: The total time spent executing a function, including all of its external calls.

  • Hot Paths: The parts of our application that were most active during the profile. These could be the parts that consumed the most memory or took the most CPU time.

The first step is registering for an account at Blackfire. The account page will have the tokens and IDs which need to be placed into Homestead.yaml after cloning the project. There's a placeholder for all those values at the bottom:

# blackfire:
#     - id: foo
#       token: bar
#       client-id: foo
#       client-token: bar

After uncommenting the rows and replacing the values, we need to install the Chrome companion.

The Chrome companion is useful only when needing to trigger profiling manually --- which will be the majority of your use cases. There are other integrations available as well, a full list of which can be found here.
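
As an aside, profiles can also be triggered from code through Blackfire's PHP SDK. A rough sketch of ours (the article itself only uses the browser extension), assuming the blackfire/php-sdk Composer package is installed and the agent is configured:

<?php
// Rough sketch using the blackfire/php-sdk package (an assumption on our part).
require __DIR__ . '/vendor/autoload.php';

use Blackfire\Client;

$blackfire = new Client();          // reads credentials from the local agent/config

$probe = $blackfire->createProbe(); // start instrumenting

// ... the code you want to profile goes here ...

$profile = $blackfire->endProbe($probe); // stop and upload the profile
echo $profile->getUrl(), PHP_EOL;        // link to the profile in the Blackfire UI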

Optimization with Blackfire

We'll test the home page: the landing page is arguably the most important part of any website, and if that takes too long to load, we're guaranteed to lose our visitors. They'll be gone before Google Analytics can kick in to register the bounce! We could test pages on which users add images, but read-only performance is far more important than write performance, so we'll focus on the former.

This version of the app loads all the galleries and sorts them by age.

Testing is simple. We open the page we want to benchmark, click the extension's button in the browser, and select "Profile!".

Here's the resulting graph:

In fact, we can see here that for the PDO execution, the exclusive time is 100% of the inclusive time. Specifically, this means that the whole dark pink part is spent inside this function, and that this function is not waiting for any other function --- it is the function being waited on. Other method calls might have light pink bars far bigger than PDO's, but those light pink parts are the sum of all the smaller light pink parts of dependent functions, which means that, looked at individually, those functions aren't the problem. The dark ones need to be handled first; they are the priority.

Also, switching to RAM mode reveals that while the whole call used a whopping ~40 MB of RAM, the vast majority went to the Twig rendering, which makes sense: it is showing a lot of data, after all.

Truncated by Planet PHP, read more at the original (another 4275 bytes)

Link
Rob Allen: Dependency Injection with OpenWhisk PHP (20.6.2018, 10:02 UTC)

Any non-trivial PHP application uses various components to do its work, from PDO through to classes from Packagist. It's fairly common in a standard PHP application to use Dependency Injection to configure and load these classes when necessary. How do we do this in a serverless environment such as OpenWhisk?

This question comes up because we do not have a single entry point into our application; instead, we have one entry point per action. If we're using serverless to write an API, then we probably have a set of actions for reading, creating, updating and deleting resources, all within the same project.

Consider a project that uses PDO to communicate with the database. The PDO object will need to be instantiated with a DSN string containing the hostname, database, credentials, etc. It's likely that these will be stored in the parameters array that's passed to the action, and set up either as package parameters or service bindings if you're using IBM Cloud Functions.

For this example, I have an ElephantSQL database set up in IBM's OpenWhisk service and I used Lorna Mitchell's rather helpful Bind Services to OpenWhisk Packages article to make the credentials available to my OpenWhisk actions.

Using a PDO instance within an action

Consider this action, which returns a list of todo items from the database. First, we instantiate and configure the PDO instance, and then create a mapper object that can fetch the todo items:

function main(array $args) : array
{
    if (!isset($args['__bx_creds']['elephantsql']['uri'])) {
        throw new Exception("ElephantSQL instance has not been bound");
    }
    $credentials = parse_url($args['__bx_creds']['elephantsql']['uri']);

    $host = $credentials['host'];
    $port = $credentials['port'];
    $dbName = trim($credentials['path'], '/');
    $user = $credentials['user'];
    $password = $credentials['pass'];

    $dsn = "pgsql:host=$host;port=$port;dbname=$dbName;user=$user;password=$password";

    $pdo = new PDO($dsn);
    $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    // now we can use $pdo to interact with our PostgreSQL database via a mapper
    $mapper = new TodoMapper($pdo);
    $todos = $mapper->fetchAll();

    return [
        'statusCode' => 200,
        'body' => $todos,
    ];
}

That's quite a lot of setup code that clearly doesn't belong here, especially as we need to do the same thing in every action in the project that connects to the database. We're also probably going to put our database access code in a mapper class that takes the PDO instance as a dependency, so, to my mind, it makes sense to use a DI container in this project.

I chose to use the Pimple DI container, because it's nice, simple and fast.

To use it, I extended Pimple\Container and added my factory to the constructor:

<?php
namespace App;

use InvalidArgumentException;
use PDO;
use Pimple\Container;

class AppContainer extends Container
{
    public function __construct(array $args)
    {
        if (!isset($args['__bx_creds']['elephantsql']['uri'])) {
            throw new InvalidArgumentException("ElephantSQL instance has not been bound");
        }
        $credentials = parse_url($args['__bx_creds']['elephantsql']['uri']);

        /**
         * Factory to create a PDO instance
         */
        $configuration[PDO::class] = function (Container $c) use ($credentials) {
            $host = $credentials['host'];
            $port = $credentials['port'];
            $dbName = trim($credentials['path'], '/');
            $user = $credentials['user'];
            $password = $credentials['pass'];

            $dsn = "pgsql:host=$host;port=$port;dbname=$dbName;user=$user;password=$password";

            $pdo = new PDO($dsn);
            $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
            return $pdo;
        };

        /**
         * Factory to create a TodoMapper instance
         */
        $configuration[TodoMapper::class] = function (Container $c) : TodoMapper {
            return new TodoMapper($c[PDO::class]);
        };

        parent::__construct($configuration);
    }

}

In this code, we create two factories: one to create the PDO instance and one to create a mapper class called TodoMapper. The TodoMapper class composes a PDO instance via its constructor, so in the factory for the TodoMapper, we retrieve the PDO instance from the container.
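
With the container in place, the action itself can shrink to something like this (a sketch of where this is heading; the finished version is in the truncated remainder of the post):

<?php
use App\AppContainer;
use App\TodoMapper;

function main(array $args) : array
{
    // All of the PDO wiring now lives in AppContainer; the action only asks
    // for what it needs.
    $container = new AppContainer($args);

    $mapper = $container[TodoMapper::class];
    $todos = $mapper->fetchAll();

    return [
        'statusCode' => 200,
        'body' => $todos,
    ];
}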

The nice thing about doing it this way is that if an action doesn't use a TodoMapper, then the connection to the

Truncated by Planet PHP, read more at the original (another 906 bytes)

Link
SitePoint PHP: Building an Image Gallery Blog with Symfony Flex: Data Testing (19.6.2018, 18:00 UTC)

In the previous article, we demonstrated how to set up a Symfony project from scratch with Flex, and how to create a simple set of fixtures and get the project up and running.

The next step on our journey is to populate the database with a somewhat realistic amount of data to test application performance.

Note: if you did the “Getting started with the app” step in the previous post, you've already followed the steps outlined in this post. If that's the case, use this post as an explainer on how it was done.

As a bonus, we'll demonstrate how to set up a simple PHPUnit test suite with basic smoke tests.

More Fake Data

Once your entities are polished, and you've had your "That's it! I'm done!" moment, it's a perfect time to create a more significant dataset that can be used for further testing and preparing the app for production.

Simple fixtures like the ones we created in the previous article are great for the development phase, where loading ~30 entities is done quickly, and it can often be repeated while changing the DB schema.

Testing app performance, simulating real-world traffic and detecting bottlenecks requires bigger datasets (i.e. a larger amount of database entries and image files for this project). Generating thousands of entries takes some time (and computer resources), so we want to do it only once.

We could try increasing the COUNT constant in our fixture classes and seeing what will happen:

// src/DataFixtures/ORM/LoadUsersData.php
class LoadUsersData extends AbstractFixture implements ContainerAwareInterface, OrderedFixtureInterface
{
    const COUNT = 500;
    ...
}

// src/DataFixtures/ORM/LoadGalleriesData.php
class LoadGalleriesData extends AbstractFixture implements ContainerAwareInterface, OrderedFixtureInterface
{
    const COUNT = 1000;
    ...
}

Now, if we run bin/refreshDb.sh, after some time we'll probably get a not-so-nice message like PHP Fatal error: Allowed memory size of N bytes exhausted.

Apart from slow execution, every error would result in an empty database because EntityManager is flushed only at the very end of the fixture class. Additionally, Faker is downloading a random image for every gallery entry. For 1,000 galleries with 5 to 10 images per gallery that would be 5,000 - 10,000 downloads, which is really slow.

There are excellent resources on optimizing Doctrine and Symfony for batch processing, and we're going to use some of these tips to optimize fixtures loading.

First, we'll define a batch size of 100 galleries. After every batch, we'll flush and clear the EntityManager (i.e., detach persisted entities) and tell the garbage collector to do its job.

To track progress, let's print out some meta information (batch identifier and memory usage).
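
In code, the loop could look roughly like this (a simplified sketch; the batch size variable and the generateGallery() helper are illustrative placeholders rather than the project's final code):

// Simplified sketch of batched loading inside LoadGalleriesData::load($manager).
// $faker and generateGallery() are assumed/hypothetical helpers for brevity.
$batchSize = 100;

for ($i = 1; $i <= self::COUNT; $i++) {
    $gallery = $this->generateGallery($faker); // builds a single Gallery entity
    $manager->persist($gallery);

    if ($i % $batchSize === 0) {
        $manager->flush();   // write the current batch to the database
        $manager->clear();   // detach persisted entities to free memory
        gc_collect_cycles(); // let PHP's garbage collector reclaim what it can

        echo sprintf(
            "%d Memory usage (currently) %dMB / (max) %dMB\n",
            $i,
            memory_get_usage(true) / 1048576,
            memory_get_peak_usage(true) / 1048576
        );
    }
}

$manager->flush(); // persist whatever remains in the final, partial batch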

Note: After calling $manager->clear(), all persisted entities become unmanaged. The entity manager doesn't know about them anymore, and you'll probably get an "entity-not-persisted" error.

The key is to merge the entity back into the manager: $entity = $manager->merge($entity);

Without the optimization, memory usage is increasing while running a LoadGalleriesData fixture class:

> loading [200] App\DataFixtures\ORM\LoadGalleriesData
100 Memory usage (currently) 24MB / (max) 24MB
200 Memory usage (currently) 26MB / (max) 26MB
300 Memory usage (currently) 28MB / (max) 28MB
400 Memory usage (currently) 30MB / (max) 30MB
500 Memory usage (currently) 32MB / (max) 32MB
600 Memory usage (currently) 34MB / (max) 34MB
700 Memory usage (currently) 36MB / (max) 36MB
800 Memory usage (currently) 38MB / (max) 38MB
900 Memory usage (currently) 40MB / (max) 40MB
1000 Memory usage (currently) 42MB / (max) 42MB

Memory usage starts at 24 MB and increases by 2 MB for every batch (100 galleries). If we tried to load 100,000 galleries, we'd need 24 MB + 999 (999 batches of 100 galleries, 99,900 galleries) * 2 MB = ~2 GB of memory.

After adding $manager->flush() and gc_collect_cycles() for every batch, removing SQL logging with $manager->getConnection()->get

Truncated by Planet PHP, read more at the original (another 5400 bytes)

Link
Evert Pot: Ketting 2.0 release (19.6.2018, 16:00 UTC)

Being fresh out of a job, I had some time to work on a new release of the Ketting library.

The Ketting library is meant to be a generic HATEOAS library for Javascript using a simple, modern API. Currently it only supports the HAL and HTML formats, but I’m curious which other formats folks are interested in seeing support for.

An example:

const ketting = new Ketting('https://example.org/bookmark');
const author = await ketting.follow('author');
console.log(await author.get());

For the 2.0 release, the biggest change I’ve made is that everything is now converted to TypeScript. TypeScript is so great, I can’t really imagine writing any serious javascript without it anymore.

Most of the sources are also upgraded to use modern javascript features, such as async/await, for...of loops and const/let instead of var.

A small bonus feature is the addition of the .patch() method on resources, which provides a pretty rudimentary shortcut to doing PATCH requests. I kept it extremely basic, because I wanted to figure out how users like to use this feature before over-engineering it.

Interested? Go check out the project and documentation on Github, or go download it off npmjs.com.

Link
Matthias Noback: Doctrine ORM and DDD aggregates (19.6.2018, 07:00 UTC)

I'd like to start this article with a quote from Ross Tuck's article "Persisting Value Objects in Doctrine". He describes different ways of persisting value objects when using Doctrine ORM. At the end of the page he gives us the following option - the "nuclear" one:

[...] Doctrine is great for the vast majority of applications but if you’ve got edge cases that are making your entity code messy, don’t be afraid to toss Doctrine out. Setup an interface for your repositories and create an alternate implementation where you do the querying or mapping by hand. It might be a PITA but it might also be less frustration in the long run.

As I discovered recently, you don't need an edge case to drop Doctrine ORM altogether. But since there are lots of projects using Doctrine ORM, with developers working on them who would like to apply DDD patterns to it, I realized there is probably an audience for a few practical suggestions on storing aggregates (entities and value objects) with Doctrine ORM.

Designing without the ORM in mind

When you (re)learn how to design domain objects using Domain-Driven Design patterns, you first need to get rid of the idea that the objects you're designing are ever going to be persisted. It's important to stay real about your domain model though; its state definitely needs to be persisted some day, or else the application won't meet its acceptance criteria. But while designing, you should not let the fact that you're using a relational database get in the way. Design the objects in such a way that they are useful, that you can do meaningful things with them, and that they are trustworthy; you should never encounter incomplete or inconsistent domain objects.

Still, at some point you're going to have to consider how to store the state of your domain objects (after all, your application at one point is going to shut down and when it comes up, it needs to have access to the same data as before it was restarted). I find that, when designing aggregates, it would be best to act as if they are going to be stored in a document database. The aggregate and all of its parts wouldn't need to be distributed across several tables in a relational database; the aggregate could just be persisted as one whole thing, filed under the ID of the aggregate's root entity.
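
For illustration, the "interface for your repositories" idea from the quote above could be as small as this (a hypothetical sketch of ours, not code from the article), dealing only with complete aggregates filed under their root entity's ID:

<?php
// Hypothetical repository interface: it only ever stores and retrieves
// whole PurchaseOrder aggregates, never individual lines or tables.
interface PurchaseOrderRepository
{
    public function save(PurchaseOrder $purchaseOrder): void;

    public function getById(PurchaseOrderId $purchaseOrderId): PurchaseOrder;
}

A Doctrine-backed implementation of such an interface can then hide however the aggregate ends up dissected into tables.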

More common however is the choice for a relational database, and in most projects such a database comes with an ORM. So then, after you've carefully designed your aggregate "the right way", the question is: how do we store this thing in our tables? A common solution is to dissect the aggregate along the lines of its root entity and optionally its child entities. Consider an example from a recent workshop: we have a purchase order and this order has a number of lines. The PurchaseOrder is the root entity of the aggregate with the same name. The Line objects are the child entities (i.e. they have an identity - a line number - which is only unique within the aggregate itself). PurchaseOrder and Line all have value objects describing parts or aspects of these entities (i.e. the product ID that was ordered, the quantity that was ordered, the supplier from whom it was ordered, and so on). This would be a simplified version of PurchaseOrder and Line:

<?php

final class PurchaseOrder
{
    /**
     * @var PurchaseOrderId
     */
    private $id;

    /**
     * @var SupplierId
     */
    private $supplierId;

    /**
     * @var Line[]
     */
    private $lines = [];

    private function __construct(
        PurchaseOrderId $purchaseOrderId,
        SupplierId $supplierId
    ) {
        $this->id = $purchaseOrderId;
        $this->supplierId = $supplierId;
    }

    public static function create(
        PurchaseOrderId $purchaseOrderId,
        SupplierId $supplierId
    ): PurchaseOrder
    {
        return new self($purchaseOrderId, $supplierId);
    }

    public function addLine(
        ProductId $productId,
        OrderedQuantity $quantity
    ): void
    {
        $lineNumber = count($this->lines) + 1;

        $this->lines[] = new Line($lineNumber, $productId, $quantity);
    }

    public function purchaseOrderId(): PurchaseOrderId
    {
        return $this->id;
    }

    // ...
}

final class Line
{
    /**
     * @var int
     */
    private $lineNumber;

    /**
     * @var ProductId
     */
    private $productId;

    /**
     * @var OrderedQuantity
     */
    private $quantity;

    public function __construct(
        int $lineNumber,
        ProductId $productId,
 

Truncated by Planet PHP, read more at the original (another 13128 bytes)

Link