Rob Allen: Changing an SQL Server primary key in Doctrine Migrations (20.11.2019, 11:00 UTC)

I recently came across a rather odd quirk when trying to change a primary key in SQL Server using Doctrine Migrations: you need two migrations to make it work.

This is incredibly minor, and I'm only writing it up because it confused me for a while; I'm documenting it so that I'll find this article if I run across the problem again in the future!

This is the migration:

final class Version20191023125629 extends AbstractMigration
{
    public function up(Schema $schema) : void
    {
        $table = $schema->getTable('page_category');
        $table->dropPrimaryKey();
        $table->setPrimaryKey(['page_uuid', 'category_id']);
    }

    public function down(Schema $schema) : void
    {
        $table = $schema->getTable('page_category');
        $table->dropPrimaryKey();
        $table->setPrimaryKey(['page_id', 'category_id']);
    }
}

When you run it with SQL Server, you get this error:

++ migrating 20191023125629

     -> IF EXISTS (SELECT * FROM sysobjects WHERE name = '[primary]')
    ALTER TABLE page_category DROP CONSTRAINT [primary]
ELSE
    DROP INDEX [primary] ON page_category
Migration 20191023125629 failed during Execution. Error An exception occurred while executing 'IF EXISTS (SELECT * FROM sysobjects WHERE name = '[primary]')
    ALTER TABLE page_category DROP CONSTRAINT [primary]
ELSE
    DROP INDEX [primary] ON page_category':

SQLSTATE[42S02]: [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Cannot drop the index 'page_category.primary', because it does not exist or you do not have permission.

The actual problem is that the primary key name of [primary] is incorrect. Something somewhere is losing the name of the current primary key ([PK__page_cat__E48D0CA0589C25F3]) because there's a setPrimaryKey() call in the same migration.
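Before splitting things up, it's worth noting that a single migration can also work if you bypass the schema API and drop the constraint by its discovered name using raw SQL. This is an untested sketch; the lookup against sys.key_constraints is my own, not from the article:

```php
// Untested sketch: look up SQL Server's generated primary key name and drop it
// with raw SQL, instead of relying on dropPrimaryKey() in the schema API.
public function up(Schema $schema) : void
{
    $this->addSql(<<<'SQL'
        DECLARE @pk sysname = (
            SELECT name
            FROM sys.key_constraints
            WHERE [type] = 'PK'
              AND parent_object_id = OBJECT_ID('page_category')
        );
        EXEC('ALTER TABLE page_category DROP CONSTRAINT [' + @pk + ']');
        SQL);
    $this->addSql('ALTER TABLE page_category ADD PRIMARY KEY (page_uuid, category_id)');
}
```

Since this drops to raw SQL, Doctrine's schema diffing never has to guess the constraint name at all.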

If we split into two migrations:

final class Version20191023125629 extends AbstractMigration
{
    public function up(Schema $schema) : void
    {
        $table = $schema->getTable('page_category');
        $table->dropPrimaryKey();
    }

    public function down(Schema $schema) : void
    {
        $table = $schema->getTable('page_category');
        $table->setPrimaryKey(['page_id', 'category_id']);
    }
}

and

final class Version20191023125630 extends AbstractMigration
{
    public function up(Schema $schema) : void
    {
        $table = $schema->getTable('page_category');
        $table->setPrimaryKey(['page_uuid', 'category_id']);
    }

    public function down(Schema $schema) : void
    {
        $table = $schema->getTable('page_category');
        $table->dropPrimaryKey();
    }
}

Then it works as expected:

++ migrating 20191023125629

     -> IF EXISTS (SELECT * FROM sysobjects WHERE name = 'PK__page_cat__E48D0CA0589C25F3')
    ALTER TABLE page_category DROP CONSTRAINT PK__page_cat__E48D0CA0589C25F3
ELSE
    DROP INDEX PK__page_cat__E48D0CA0589C25F3 ON page_category

  ++ migrated (0.77s)

  ++ migrating 20191023125630

     -> ALTER TABLE page_category ADD PRIMARY KEY (page_uuid, category_id)

  ++ migrated (0.77s)

As you can see, it's hardly a big problem to create two migrations to work around this and I've reported it to the project as issue 3736.

Matthias Noback: Improvements in personal website deployment (20.11.2019, 10:40 UTC)

I wanted to be able to deploy MailComments to my Digital Ocean droplet (VPS) easily and without thinking. Due to a lack of maintenance, some more "operations" work had piled up as well:

  • The Digital Ocean monitoring agent had to be upgraded, but apt didn't have enough memory to do that on this old, small droplet.
  • The Ubuntu version running on that droplet was also a bit old by now.

The easiest thing to do was to just create a new droplet and prepare it for deploying my personal websites. Unfortunately, my DNS setup was completely tied to the IP address of the droplet, so I couldn't really create a new droplet and quickly switch; I'd have to wait for the new DNS information to propagate.

These issues were in the way of progress, so I decided to take some more time to rearrange things.

First, I created a droplet in a Digital Ocean region that supports both floating IPs and volumes (more about that later). Then I added a floating IP to the existing droplet. A floating IP means that you can use a single IP address for all incoming traffic, but you can dynamically assign this IP address to any droplet in the same region. This means you can set up a new droplet and, when it's ready, assign the floating IP address to the new droplet, then safely destroy the old droplet without losing any traffic.
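If you manage droplets from the command line, Digital Ocean's doctl CLI can perform that reassignment; the floating IP and droplet ID below are placeholders, not real values:

```shell
# Point the floating IP at the new droplet (placeholder IP and droplet ID).
doctl compute floating-ip-action assign 203.0.113.10 187001234
```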

Then I started working on that new droplet, setting it up the way I wanted. This was my shopping list:

  • A newer version of Ubuntu (it didn't have to be Ubuntu, but I don't have experience with any of the other distributions)
  • Docker
  • Nothing else really...

If that's your shopping list, it's easy to create a new droplet using Docker Machine. It has a driver for Digital Ocean. I enabled monitoring, used the standard Droplet size, and that was it. The advantage being: you don't need to set up a root password or anything. In fact, you can't just log in to the server; you'll always use an SSH key for it.

This is a nice way of keeping yourself from logging in to your server and performing all kinds of manual setup steps that you could never reproduce in a script. That kind of manual work makes you too attached to one particular server, and afraid of destroying it and starting all over.

Here is the script I created for provisioning a new droplet:

#!/usr/bin/env bash

# Stop at first error; stop at undefined variable
set -eu

# On the local development machine:

# Read environment variables from .env
source .env

DIGITALOCEAN_REGION="${DIGITALOCEAN_REGION-ams3}"
DIGITALOCEAN_ACCESS_TOKEN="${DIGITALOCEAN_ACCESS_TOKEN}"
DIGITALOCEAN_SSH_KEY_FINGERPRINT="${DIGITALOCEAN_SSH_KEY_FINGERPRINT}"

NEW_MACHINE_UUID=$(uuidgen)

docker-machine create --driver digitalocean \
        --digitalocean-access-token="${DIGITALOCEAN_ACCESS_TOKEN}" \
        --digitalocean-ssh-key-fingerprint="${DIGITALOCEAN_SSH_KEY_FINGERPRINT}" \
        --digitalocean-image=ubuntu-18-04-x64 \
        --digitalocean-region="${DIGITALOCEAN_REGION}" \
        --digitalocean-size=s-1vcpu-1gb \
        --digitalocean-monitoring=true \
        "${NEW_MACHINE_UUID}"
echo "${NEW_MACHINE_UUID}" > machine_id

Note: the .env file that's loaded by source looks a bit different from your average .env file:

export DIGITALOCEAN_ACCESS_TOKEN=...
export DIGITALOCEAN_SSH_KEY_FINGERPRINT=...
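Once provisioning has run, the saved machine name can be used to point the local Docker CLI at the new droplet. A hypothetical session, assuming the script above was saved as provision.sh:

```shell
# Provision a new droplet, then target it with the local Docker client.
bash provision.sh
eval "$(docker-machine env "$(cat machine_id)")"
docker ps   # now lists containers on the droplet, not on this machine
```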

Traefik

On the old droplet I used jwilder/nginx-proxy, since it had a nice setup for automatically creating certificates to support a secure connection. I had been aware of Traefik for some time, and thought it would be a great replacement for nginx-proxy. It turned out to be a bit hard to figure some things out, but a couple of hours later I managed to configure it properly.

I wanted to be able to launch any Docker container inside the traefik Docker network and let Traefik recognize it automatically as a service it should route traffic to. Traefik is built for this, and you just have to add a couple of labels to the container definitions:

services:
  matthiasnoback_nl:
    # all the usual things
    # ...
    networks:
      - traefik
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.matthiasnoback_nl_https.rule=Host(`matthiasnoback.nl`)"
      - "traefik.http.routers.matthiasnoback_nl_https.entrypoints=websecure"
      - "traefik.http.routers.matthiasnoback_nl_https.tls.certresolver=myhttpchallenge"
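For labels like these to be picked up, Traefik itself needs matching static configuration. A minimal sketch, assuming Traefik v2 with the Docker provider and an ACME HTTP challenge resolver named myhttpchallenge (the email address is a placeholder):

```yaml
entryPoints:
  web:
    address: ":80"
  websecure:
    address: ":443"

certificatesResolvers:
  myhttpchallenge:
    acme:
      email: you@example.com   # placeholder
      storage: /acme.json
      httpChallenge:
        entryPoint: web

providers:
  docker:
    exposedByDefault: false   # only route containers with traefik.enable=true
```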

Truncated by Planet PHP, read more at the original (another 8270 bytes)

Voices of the ElePHPant: Interview with Chris Riley (19.11.2019, 12:30 UTC)
Matthias Noback: Introducing MailComments (19.11.2019, 09:30 UTC)

Many people use Disqus as a commenting system for their (static) blog. It's a free service, easy to get started with, and it has everything you'd expect from a commenting system. It's free up to the point where Disqus decides to show advertisements alongside the comments, and these advertisements are so bad that you will look for better options very quickly. The way out of advertisements, of course, is to start paying for their service. Actually, I would already have paid for it, if only they had reduced the tremendous amount of stuff they load on your page, and the cookies they need for tracking you across the internet.

So, even though I did start paying for Disqus to at least get rid of the horrible ads, I added a little card on my Trello board saying: "Replace Disqus". This turned out to be a nice side project, and after spending many hours on it, it's now ready for production. In fact, there has been a silent launch, but nobody has been using it so far. Let's see if that changes after today.

The commenting system that now finally replaces Disqus is called MailComments. I'm planning to make the software more widely available in a couple of months, but I first need some feedback from running it in production before releasing it in any way.

About MailComments

The idea is rather simple, but of course, there are many implementation details to be considered. In its essence, MailComments allows you to write comments by email. Below every post there is a mailto link. When you click on it, you can compose a comment in your own mail client. After sending it, it will arrive in a dedicated mailbox. MailComments reads new emails in that mailbox, and starts processing them. After a short while, the message is converted into an HTML snippet that can be included in the blog's own HTML page.
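As a rough illustration of that flow, a per-post mailto link might be generated like this; the mailbox address and the bracketed-slug subject convention are my own assumptions, not MailComments' actual format:

```php
<?php

// Sketch of a per-post comment link. A token in the subject line lets the
// system match an incoming email to the post being commented on.
function commentMailtoLink(string $mailbox, string $postSlug): string
{
    $subject = rawurlencode(sprintf('Comment on [%s]', $postSlug));

    return sprintf('mailto:%s?subject=%s', $mailbox, $subject);
}

echo commentMailtoLink('comments@example.com', 'introducing-mailcomments');
// → mailto:comments@example.com?subject=Comment%20on%20%5Bintroducing-mailcomments%5D
```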

Security

When you open up a mailbox and allow people to use it to post messages directly to your website, of course you're asking for trouble. That's why I've added a few security measures:

  • HTML emails will be cleaned first. Any imaginable thing that I wouldn't want on my website will be removed (e.g. <script> tags, and much more).
  • Messages won't be processed until I have marked them as "seen" and I didn't delete them within x seconds.
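The cleaning step could look roughly like the following sketch, assuming an allow-list approach; the real MailComments cleaner is not public, so the tag list and regexes here are illustrative only:

```php
<?php

// Illustrative HTML cleaner: drop <script> elements entirely, keep only a
// small allow-list of tags, and strip inline event handlers.
function cleanCommentHtml(string $html): string
{
    // Remove <script> elements including their contents.
    $html = preg_replace('#<script\b[^>]*>.*?</script>#is', '', $html);

    // Keep only a small allow-list of tags.
    $html = strip_tags($html, '<p><a><em><strong><blockquote><code><pre>');

    // Strip inline event handlers (onclick=..., onload=..., etc.).
    return preg_replace('/\son\w+\s*=\s*("[^"]*"|\'[^\']*\'|\S+)/i', '', $html);
}

echo cleanCommentHtml('<p onclick="evil()">Hi</p><script>alert(1)</script>');
// → <p>Hi</p>
```

A production cleaner would use a real HTML purifier rather than regexes, but the allow-list idea is the same.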

Architecture

I'm particularly proud of the design and development process behind this piece of software. At its core is a domain model for posts and comments. It has ports for creating posts, adding comments to posts, replying to comments, deleting comments, and deleting posts. The adapters for these ports use the ddeboer/imap package to connect to the mailbox using IMAP. They take incoming email messages and convert them to commands that can be processed by the application services.

The test suite consists of unit tests for the domain model (PHPUnit), acceptance tests for the application layer (Behat), integration tests for the port adapters (PHPUnit), and system tests (Behat) that show that everything works well together, using GreenMail, an SMTP and IMAP server you can use for local testing.

The application consists of two main components: a long-running process that checks the mailbox and processes incoming messages, and a client package which the blog maintainer can use to run special commands via email. For instance: you can export your Disqus comments as XML and import them by sending an email to the MailComments system itself. It will take the XML file and import the comments. There is also a Sculpin plugin (this blog runs on Sculpin), which registers new posts with the MailComments system (also by sending an email to it).
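The "email message → command" idea described above can be sketched as follows; the class and field names are my assumptions, since the real MailComments code is not public:

```php
<?php

// Illustrative command object: the IMAP adapter would build one of these from
// an incoming message, e.g. using the In-Reply-To header to find the parent.
final class ReplyToComment
{
    public function __construct(
        public string $postId,
        public string $parentCommentId,
        public string $authorEmail,
        public string $body,
    ) {}
}

$command = new ReplyToComment(
    'introducing-mailcomments',
    'comment-42',
    'reader@example.com',
    'Nice post!'
);
echo $command->postId;
```

Keeping the command a plain data object means the application service that handles it never has to know the comment arrived by email.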

More information

You can find some more information about MailComments on its website, mailcomments.com (which for now redirects to a page on this blog).

Happy commenting everyone! And I can now finally archive that card on my Trello board ;)

PHP: Hypertext Preprocessor: PHP 7.4.0RC6 Released! (14.11.2019, 00:00 UTC)
The PHP team is glad to announce the sixth release candidate of PHP 7.4: PHP 7.4.0RC6. This continues the PHP 7.4 release cycle, the rough outline of which is specified in the PHP Wiki. Please DO NOT use this version in production, it is an early test version. For source downloads of PHP 7.4.0RC6 please visit the download page. Please carefully test this version and report any issues found in the bug reporting system. For more information on the new features and other changes, you can read the NEWS file, or the UPGRADING file for a complete list of upgrading notes. These files can also be found in the release archive. The next release would be 7.4.0, planned for November 28th. The signatures for the release can be found in the manifest or on the QA site. Thank you for helping us make PHP better.
Voices of the ElePHPant: Interview with Paul M. Jones (13.11.2019, 20:31 UTC)
Voices of the ElePHPant: Interview with Sherri Wheeler (12.11.2019, 15:55 UTC)
Voices of the ElePHPant: Interview with Sara Golemon and Elizabeth Smith (7.11.2019, 16:05 UTC)
Derick Rethans: PHP Internals News: Episode 35: Cryptography (7.11.2019, 09:35 UTC)


In this episode of "PHP Internals News" I chat with Scott Arciszewski (Website, Twitter, GitHub, Patreon) about the recent PHP-FPM vulnerability and the state of cryptography in PHP.

The RSS feed for this podcast is https://derickrethans.nl/feed-phpinternalsnews.xml, you can download this episode's MP3 file, and it's available on Spotify and iTunes. There is a dedicated website: https://phpinternals.news

Credits

Music: Chipper Doodle v2 — Kevin MacLeod (incompetech.com) — Creative Commons: By Attribution 3.0

Become a Patron!
Rob Allen: Testing migrating to Laminas (6.11.2019, 11:00 UTC)

Zend Framework is being renamed to Laminas and all the source code is moving to a new GitHub organisation. Implicitly, this means a new PHP top-level namespace. As you can imagine, a lot of our code will need to change, so Matthew, Michał and the team have been writing migration tooling to make this easier.

It's now time to test it and they need all the help they can get on real-world codebases, so let's look at how we do that. I have a relatively large Slim Framework application that uses a variety of Zend Framework components including Zend-Authentication, Zend-Acl, Zend-Config, Zend-Form, Zend-InputFilter and Zend-Mail, so maybe it's a good case-study.

Note: The migration-tooling is currently in testing and not ready for production!

Rather helpfully Matthew has written a guide on how to test the Laminas Migration, so we'll follow the instructions.

Step 1: Install laminas-migration

I already have a global Composer set-up and its bin directory is on my path, so I ensured it was up to date and then installed laminas-migration into it:

$ composer global update
$ composer global require laminas/laminas-migration

(As a side-note, I see that the tools in my global Composer set-up have changed over time, as I no longer have PHPUnit globally, but have added changelog-generator.)

Step 2: Create a new branch

We shouldn't work on the main line directly, so next we create a branch:

$ git checkout -b migrate-to-laminas
Switched to a new branch 'migrate-to-laminas'

Step 3: Run the migration

Now we can run the migration tool itself:

$ laminas-migration migrate -e docs

The -e option allows you to exclude directories; I don't want my docs directory to be updated.

Interestingly, the tool provides no output on success, but running git status shows that things happened!

$ git status
On branch migrate-to-laminas
Changes not staged for commit:
  (use "git add/rm <file>..." to update what will be committed)
  (use "git restore <file>..." to discard changes in working directory)
    ...
    modified:   app/modules/Page/src/AdminPageController.php
    modified:   app/modules/Page/src/EditPageForm.php
    ...
    modified:   composer.json
    deleted:    composer.lock
    modified:   tests/Unit/Page/AdminPageControllerTest.php
    modified:   tests/Unit/Page/EditPageFormTest.php
    ...
    deleted:    lib/Logger/ZendMailHandler.php

Untracked files:
  (use "git add <file>..." to include in what will be committed)
    lib/Logger/LaminasMailHandler.php

no changes added to commit (use "git add" and/or "git commit -a")

I've removed a lot of lines, but a few things are of interest:

  • All uses of a Zend component are replaced with Laminas

    In my case, this is nearly always the set of use statements at the top.

    For example, an arbitrary git diff shows:

    use User\User;
    -use Zend\Authentication\AuthenticationService;
    -use Zend\Authentication\Result as AuthenticationResult;
    +use Laminas\Authentication\AuthenticationService;
    +use Laminas\Authentication\Result as AuthenticationResult;
    -use Zend\Stdlib\ArrayUtils;
    +use Laminas\Stdlib\ArrayUtils;
  • Your own classes containing Zend are renamed

    I have a class called ZendMailHandler. This was renamed to LaminasMailHandler and hence the filename lib/Logger/ZendMailHandler.php was renamed to lib/Logger/LaminasMailHandler.php.

    Note that function names, variable names, strings and comments which use the word Zend are not changed, so you'll have to update those yourself if you want to.

  • composer.json is updated. vendor and composer.lock are removed

    After updating composer.json, the migration tool has blown away my vendor directory and removed composer.lock, so I'll need to run composer update to get them back.

Step 4: Run composer install

We need to bring our dependencies back. As we're in testing, we need to manually add the Laminas repository to composer.json, but this won't be needed after the official migration from Zend Framework to Laminas.

Truncated by Planet PHP, read more at the original (another 1593 bytes)

Link