PHP: Hypertext Preprocessor: PHP 7.3.0alpha4 Released (19.7.2018, 00:00 UTC)
The PHP team is glad to announce the release of the fourth PHP 7.3.0 version, PHP 7.3.0alpha4. The rough outline of the PHP 7.3 release cycle is specified in the PHP Wiki. For source downloads of PHP 7.3.0alpha4 please visit the download page. Windows sources and binaries can be found on windows.php.net/qa/. Please carefully test this version and report any issues found in the bug reporting system. THIS IS A DEVELOPMENT PREVIEW - DO NOT USE IT IN PRODUCTION! For more information on the new features and other changes, you can read the NEWS file, or the UPGRADING file for a complete list of upgrading notes. These files can also be found in the release archive. The next release would be Beta 1, planned for August 2nd. The signatures for the release can be found in the manifest or on the QA site. Thank you for helping us make PHP better.
Link
Matthew Weier O'Phinney: Notes on GraphQL (18.7.2018, 22:05 UTC)

The last week has been my first foray into GraphQL, using the GitHub GraphQL API endpoints. I now have Opinions™.

The promise is fantastic: query for everything you need, but nothing more. Get it all in one go.

But the reality is somewhat... different.

What I found was that you end up with a lot of garbage data structures that you then, on the client side, need to decipher and massage, unpacking edges, nodes, and whatnot. I ended up having to do almost a dozen array_column, array_map, and array_reduce operations on the returned data to get a structure I can actually use.

The final data I needed looked like this:

[
  {
    "name": "zendframework/zend-expressive",
    "tags": [
      {
        "name": "3.0.2",
        "date": "2018-04-10"
      }
    ]
  }
]

To fetch it, I needed a query like the following:

query showOrganizationInfo(
  $organization:String!
  $cursor:String!
) {
  organization(login:$organization) {
    repositories(first: 100, after: $cursor) {
      pageInfo {
        startCursor
        hasNextPage
        endCursor
      }
      nodes {
        nameWithOwner
        tags:refs(refPrefix: "refs/tags/", first: 100, orderBy:{field:TAG_COMMIT_DATE, direction:DESC}) {
          edges {
            tag: node {
              name
              target {
                ... on Commit {
                  pushedDate
                }
                ... on Tag {
                  tagger {
                    date
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}

Which gave me data like the following:

{
  "data": {
    "organization": {
      "repositories: {
        "pageInfo": {
          "startCursor": "...",
          "hasNextPage": true,
          "endCursor": "..."
        },
        "nodes": [
          {
            "nameWithOwner": "zendframework/zend-expressive",
            "tags": {
              "edges": [
                "tag": {
                  "name": "3.0.2",
                  "target": {
                    "tagger": {
                      "date": "2018-04-10"
                    }
                  }
                }
              ]
            }
          }
        ]
      }
    }
  }
}
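A minimal sketch of the kind of client-side reshaping this requires (assuming the response above has already been decoded with json_decode($body, true); the function name is made up):

function flattenRepositories(array $response): array
{
    $nodes = $response['data']['organization']['repositories']['nodes'];

    return array_map(function (array $node): array {
        $tags = array_map(function (array $edge): array {
            $target = $edge['tag']['target'];

            return [
                'name' => $edge['tag']['name'],
                // A tag target is either a Commit (pushedDate) or an annotated Tag (tagger.date).
                'date' => $target['tagger']['date'] ?? $target['pushedDate'] ?? null,
            ];
        }, $node['tags']['edges']);

        return [
            'name' => $node['nameWithOwner'],
            'tags' => $tags,
        ];
    }, $nodes);
}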

How did I discover how to create the query? I'd like to say it was by reading the docs. I really would. But these gave me almost zero useful examples, particularly when it came to pagination, ordering results sets, or what those various "nodes" and "edges" bits were, or why they were necessary. (I eventually found the information, but it's still rather opaque as an end-user.)

Additionally, see that pageInfo bit? This brings me to my next point: pagination sucks, particularly if it's not at the top-level. You can only fetch 100 items at a time from any given node in the GitHub GraphQL API, which means pagination. And I have yet to find a client that will detect pagination data in results and auto-follow them. Additionally, the "after" property had to be something valid... but there were no examples of what a valid value would be. I had to resort to StackOverflow to find an example, and I still don't understand why it works.

I get why clients cannot unfurl pagination, as pagination data could appear anywhere in the query. However, it hit me hard, as I thought I had a complete set of data, only to discover around half of it was missing once I finally got the processing correct.

If any items further down the tree also require pagination, you're in for some real headaches, as you then have to fetch paginated sets depth-first.
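Following that pagination by hand ends up looking something like the sketch below; runQuery() is a hypothetical helper that sends the showOrganizationInfo query above with the given variables and returns the decoded response, and the starting cursor value is an assumption (as noted above, finding one the API accepts is its own adventure).

$repositories = [];
$cursor = ''; // assumption: a starting cursor value the API will accept

do {
    $response = runQuery([
        'organization' => 'zendframework',
        'cursor'       => $cursor,
    ]);

    $connection   = $response['data']['organization']['repositories'];
    $repositories = array_merge($repositories, $connection['nodes']);
    $cursor       = $connection['pageInfo']['endCursor'];
} while ($connection['pageInfo']['hasNextPage']);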

So, while GraphQL promises fewer round trips and exactly the data you need, my experience so far is:

  • I end up having to be very careful about structuring my queries, paying huge attention to pagination potential, and often sending multiple queries ANYWAYS. A well-documented REST API is often far easier to understand and work with immediately.

  • I end up doing MORE work client-side to make the data I receive back USEFUL. This is because the payload structure is based on the query structure and the various permutations you need in order to get at the data you need. Again, a REST API usually has a single, well-documented payload, making consumption far easier.

I'm sure I'm

Truncated by Planet PHP, read more at the original (another 1995 bytes)

Link
Evert Pot: 202 Accepted (17.7.2018, 15:00 UTC)

202 Accepted means that the server accepted the request, but it’s not yet sure whether the request will complete successfully.

The specification calls it ‘intentionally non-committal’. You might see APIs using this response for, for example, asynchronous batch processing. HTTP doesn’t have a standard way to communicate, after the fact, whether a request eventually succeeded. An API using this status might use some other facility to communicate this later.

For example, it might send an email to a user telling them that the batch process worked, or it might expose another endpoint in the API that indicates the current status of a long-running process.

Example

POST /my-batch-process HTTP/1.1
Content-Type: application/json

...

HTTP/1.1 202 Accepted
Link: </batch-status/5545>; rel="http://example.org/batch-status"
Content-Length: 0
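
A client could then follow that link and poll for the outcome. For example (the status document shown here is made up for illustration; the specification doesn’t prescribe any particular format):

GET /batch-status/5545 HTTP/1.1
Accept: application/json

HTTP/1.1 200 OK
Content-Type: application/json

{ "status": "processing" }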

Link
Matthias Noback: Objects should be constructed in one go (17.7.2018, 07:50 UTC)

Consider the following rule:

When you create an object, it should be complete, consistent and valid in one go.

It is derived from the more general principle that it should not be possible for an object to exist in an inconsistent state. I think this is a very important rule, one that will gradually lead everyone away from the swamps of those dreaded "anemic" domain models. However, the question still remains: what does all of this mean?

Well, for example, we should not be able to construct a Geolocation object with only a latitude:

final class Geolocation
{
    private $latitude;
    private $longitude;

    public function __construct()
    {
    }

    public function setLatitude(float $latitude): void
    {
        $this->latitude = $latitude;
    }

    public function setLongitude(float $longitude): void
    {
        $this->longitude = $longitude;
    }
}

$location = new Geolocation();
// $location is in invalid state!

$location->setLatitude(-20.0);
// $location is still in invalid state!

It shouldn't be possible to leave it in this state. It shouldn't even be possible to construct it with no data in the first place, because having a specific value for latitude and longitude is one of the core aspects of a geolocation. These values belong together, and a geolocation "can't live" without them. Basically, the whole concept of a geolocation would become meaningless if this were possible.

An object usually requires some data to fulfill a meaningful role. But it also poses certain limitations to what kind of data, and which specific subset of all possible values in the universe would be allowed. This is where, as part of the object design phase, you'll start looking for domain invariants. What do we know from the relevant domain that would help us define a meaningful model for the concept of a geolocation? Well, one of these things is that latitude and longitude should be within a certain range of values, i.e. -90 to 90 inclusive and -180 to 180 inclusive, respectively. It would definitely not make sense to allow any other value to be used. It would render all modelled behavior regarding geolocations useless.

Taking all of this into consideration, you may end up with a class that forms a sound model of the geolocation concept:

final class Geolocation
{
    private $latitude;
    private $longitude;

    public function __construct(
        float $latitude,
        float $longitude
    ) {
        Assertion::between($latitude, -90, 90);
        $this->latitude = $latitude;

        Assertion::between($longitude, -180, 180);
        $this->longitude = $longitude;
    }
}

$location = new Geolocation(-20.0, 100.0);

This effectively protects the geolocation's domain invariants, making it impossible to construct an invalid, incomplete or useless Geolocation object. Whenever you encounter such an object in your application, you can be sure that it's safe to use. No need to use a validator of some sort to validate it first! This is why that rule about not allowing objects to exist in an inconsistent state is wonderful. My not-to-be-nuanced advice is to apply it everywhere.
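
To make that concrete: constructing an invalid instance now simply fails (assuming the Assertion class used here is the one from beberlei/assert, which throws an InvalidArgumentException when a check fails):

$location = new Geolocation(-20.0, 100.0); // fine, the invariants hold

new Geolocation(-200.0, 100.0); // throws an InvalidArgumentException,
                                // since -200.0 is outside the -90..90 latitude range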

An aggregate with child entities

The rule isn't without issues though. For example, I've been struggling to apply it to an aggregate with child entities, in particular, when I was working on modelling a so-called "purchase order". It's sent to a supplier to ask for some goods (these "goods" are specific quantities of a certain product). The domain expert talks about this as "a header with lines", or "a document with lines". I decided to call the aggregate root "Purchase Order" (a class named PurchaseOrder) and to call the child entities representing the ordered goods "Lines" (in fact, every line is an instance of Line).

An important domain invariant to consider is: "every purchase order has at least one line". After all, it just doesn't make sense for an order to have no lines. When trying to apply this design rule, my first instinct was to provide the list of lines as a constructor argument. A simplified implementation (note that I don't use proper values objects in these examples!) would look like this:

final class PurchaseOrder
{
    private $lines;

    /**
     * @param Line[] $lines
     */
    public function __construct(array $lines)
    {
        Assertion::greaterOrEqualThan(count($lines), 1,
            'A purchase order should have at least one line');

        $this->lines = $lines;
    }
}

final class Line
{
    private $lineNumber;
    private $productId;
    private $quantity;

    public function __construct(
        int $lineNumber,
        int $productId,
 

Truncated by Planet PHP, read more at the original (another 11698 bytes)

Link
Evert Pot: Bye Disqus, hello Webmention! (16.7.2018, 16:00 UTC)

Since 2013 I’ve used Disqus on this website for comments. Over the years Disqus has been getting ‘fatter’, so I’ve been thinking of switching to something new.

Then on Friday, I saw a tweet which got me inspired:

(Embedded tweet)

This links to Nicolas Hoizey’s blog, in which he details moving his GitHub Pages-based blog from Disqus to Webmentions. I spent all day Saturday doing the same.

What are webmentions?

Webmention is a W3C standard for distributed commenting. It’s very similar to “Pingbacks”. When somebody responds to an article here from their own blog, a link to their response is created here automatically.

I used the Webmention.io hosted service to do this. To receive webmentions, I just needed to embed the following in the <head> of this site:

<link rel="pingback" href="https://webmention.io/evertpot.com/xmlrpc" />
<link rel="webmention" href="https://webmention.io/evertpot.com/webmention" />

The webmention.io site has a simple API with open CORS headers. I wrote a custom script to embed the webmentions in this blog; the source is on GitHub.

Importing old comments

I exported the Disqus comments and wrote a script to convert them into JSON. The source for the exporter is reusable; I also put it on GitHub in case anyone finds it useful.

The last time I switched blogging systems I used Habari, but I also never got around to importing comments. I took the time to import those as well, so now comments all the way from 2006 are back!

Jekyll has a ‘data files’ feature, which allows me to just drop the json file in a _data directory, and with a recursive liquid include I can show comments and threads:

<script src="https://gist.github.com/evert/409f5effca5e7fe706bd1c3aad13af9d.js"/>

Unfortunately Disqus has no means to get an email address, url or avatar from the export, so all Disqus comments now just show up as a boring name, as can be seen here.

If you ever commented on this site with Disqus, and want to show up with a url and/or avatar, find yourself in the comment archive on GitHub and send me a PR, or just tell me!

Getting tweets and likes from twitter

To get mentions from social media, like Twitter, I’m using Bridgy. This is a free service that listens for responses to tweets and converts them to Webmentions.

It also supports other networks, but Twitter is the only one I have set up. To see it in action, you can see a bunch of twitter responses right below this article.

What’s missing?

It’s not easy currently to discover on this site that Webmentions are possible, and it’s not possible to leave a regular comment anymore. I hope I can fix both of these in the future. I think the result is that the barrier to entry has become very high, and I’d like to see if it’s possible for me to reduce that again. How would you go about it?

Webmention.io does not have good spam protection. Spam was a major issue with pingbacks, and is pretty much why pingbacks died. Webmention is not big enough for th

Truncated by Planet PHP, read more at the original (another 539 bytes)

Link
Evert Pot: 201 Created (10.7.2018, 15:00 UTC)

201 Created, just like 200 OK, means that the request was successful, but it also resulted in a new resource being created.

In the case of a PUT request, it means that a new resource was created at the url that was specified in the request.

Example

PUT /new-resource HTTP/1.1
Content-Type: text/html
Host: example.org

...

HTTP/1.1 201 Created
ETag: "foo-bar"

POST requests

If you got a 201 in response to a POST request, it means that a new resource was created at a different endpoint. For those cases, a Location header must be included to indicate where the new resource lives.

In the following example, we’re creating a new resource via POST, and the server responds with the new location and the ETag of the new resource.

POST /collection/add-member HTTP/1.1
Content-Type: application/json
Host: example.org

{ "foo": "bar" }

HTTP/1.1 201 Created
ETag: "gir-zim"
Location: /collection/546

It’s a common misconception that POST is generally for creating new resources, and PUT is strictly for updating them. However, the real difference is that PUT should be the preferred method if the client can determine the url of the resource it wants to create.

In practice most servers do want control over the url, perhaps because it’s tied to an auto-incrementing database id.

Link
Paul M. Jones: Atlas.Orm 3.0 (“Cassini”) Now Stable (10.7.2018, 14:36 UTC)

I am delighted to announce the immediate availability of Atlas.Orm 3.0 (“Cassini”), the flagship package in the Atlas database framework. Installation is as easy as composer require atlas/orm ~3.0.

Atlas.Orm helps you build and work with a model of your persistence layer (i.e., tables and rows) while providing a path to refactor towards a richer domain model as needed. You can read more about Atlas at the newly-updated project site, and you can find extensive background information in these blog posts:

If you want a data-mapper alternative to Doctrine, especially for your pre-existing table structures, then Atlas is for you!


Read the Reddit commentary on this post here.

Link
Matthias Noback: About fixtures (10.7.2018, 07:15 UTC)

System and integration tests need database fixtures. These fixtures should be representative and diverse enough to "fake" normal usage of the application, so that the tests using them will catch any issues that might occur once you deploy the application to the production environment. There are many different options for dealing with fixtures; let's explore some of them.

Generate fixtures the natural way

The first option, which I assume not many people are choosing, is to start up the application at the beginning of a test, then navigate to specific pages, submit forms, click buttons, etc. until finally the database has been populated with the right data. At that point, the application is in a useful state and you can continue with the act/when and assert/then phases. (See the recent article "Pickled State" by Robert Martin on the topic of tests as specifications of a finite state machine).

Populating the database like this isn't really the same as loading database fixtures, but these activities could have the same end result. The difference is that the natural way of getting data into the database - using the user interface of the application - leads to top quality data:

  • You don't need to violate the application's natural boundaries by talking directly to the database. You approach the system as a black box, and don't need to leverage your knowledge of its internals to get data into the database.
  • You don't have to maintain these fixtures separately from the application. They will be recreated every time you run the tests.
  • This means that these "fixtures" never become outdated, incomplete, invalid, inconsistent, etc. They are always correct, since they use the application's natural entry points for entering the data in the first place.

However, as you know, the really big disadvantage is that running those tests will become very slow. Creating an account, logging in, activating some settings, filling in some more forms, etc. every time before you can verify anything; that's going to take a lot of time. So honestly, though it would be great, this is not a realistic scenario in most cases. Instead, you might consider something else:

Generate once, reload for every test case

Instead of navigating the application and populating the database one form at a time, for every test case again, you could do it once, and store some kind of snapshot of the resulting data set. Then for the next test case you could quickly load that snapshot and continue with your test work from there.

This approach has all the advantages of the first option, but it will make your test suite run a lot faster. The risk is that the resulting set of fixtures may not be diverse enough to test all the branches of the code that needs to be tested.
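
As a trivial illustration of the "reload the snapshot" idea: if the test database happens to be a single SQLite file, restoring the snapshot before each test case can be as simple as copying a pristine copy back into place (a sketch, assuming PHPUnit; the file paths are made up for the example):

use PHPUnit\Framework\TestCase;

abstract class DatabaseTestCase extends TestCase
{
    protected function setUp(): void
    {
        // Restore the snapshot that was generated once, up front, by clicking
        // through the application (or by whatever other means).
        copy(__DIR__ . '/fixtures/snapshot.sqlite', __DIR__ . '/../var/test.sqlite');
    }
}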

With both of these options, you may also end up with a chicken/egg problem. You may need some data to be in the database first, to make it even possible to navigate to the first page where you could start building up the fixtures. Often this problem itself may provide useful feedback about the design of your application:

  • Possibly, you have data in the database that shouldn't be there anyway (e.g. a country codes table that might as well have been a text file, or a list of class constants).
  • Possibly, the data can only end up in the database by manual intervention; something a developer or administrator gets asked to do every now and then. In that case, you could consider implementing a "black box alternative" for it (e.g. a page where you can accomplish the same thing, but with a proper form or button).

If these are not problems you can easily fix, you may consider using several options combined: first, load in some "bootstrap" data with custom SQL queries (see below), then navigate your way across the application to bring it in the right state.

But, there are other options, like:

Insert custom data into the database

If you don't want to or can't naturally build up your fixtures (e.g. because there is no straightforward way to get it right), you can in fact do several alternative things:

  1. Use a fixture tool that lets you use actually instantiated entities as a source for fixtures, or
  2. Manually write INSERT queries (possibly with the same net result; see the sketch after this list).
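
A minimal sketch of option 2, assuming PDO with an SQLite test database (the DSN, table and column names are invented for the example):

// Load hand-written fixture rows before the tests run.
$pdo = new PDO('sqlite:' . __DIR__ . '/../var/test.sqlite');

$pdo->exec(
    "INSERT INTO purchase_orders (id, supplier_id, created_at)
     VALUES (1, 42, '2018-07-01')"
);
$pdo->exec(
    "INSERT INTO purchase_order_lines (purchase_order_id, line_number, product_id, quantity)
     VALUES (1, 1, 123, 10)"
);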

Option 1 has proven useful if you use your database as some anonymous storage thing that's used somewhere behind a repository. If you work with an ORM, that is probably the case. Option 2 is the right choice if your database is this holy thing in the

Truncated by Planet PHP, read more at the original (another 3485 bytes)

Link
Pascal Landau: How to setup PHP, PHP-FPM and NGINX on Docker in Windows 10 [Tutorial Part 1] (8.7.2018, 20:11 UTC)

You have probably heard of the new kid on the block called "Docker"? You are a PHP developer and would like to get into that, but you haven't had the time to look into it yet? Then this tutorial is for you! By the end of it, you should know:

  • how to setup Docker "natively" on a Windows 10 machine
  • how to build and run containers from the command line
  • how to log into containers and explore them for information
  • what a Dockerfile is and how to use it
  • how containers can talk to each other
  • how docker-compose can be used to fit everything nicely together

Note: I will not only walk the happy path during this tutorial. That means I'll deep-dive into some things that are not completely related to Docker (e.g. how to find out where the configuration files for php-fpm are located), but that are imho important to understand, because they enable you to solve problems later on your own.

But if you are short on time, you might also jump directly to the tl;dr.

This is the first part of a (probably) multi-part series on Docker. The next part will explain how to set up PHP in Docker containers in order to work nicely with PHPStorm when using XDebug.

Table of contents

Introduction

Preconditions

I'm assuming that you have installed Git bash for Windows. If not, please do that before, see Setting up the software: Git and Git Bash.

Why use Docker?

I won't go into too much detail about what Docker is and why you should use it, because others have already talked about this extensively.

As for me, my main reasons were

  • Symlinks in vagrant didn't work the way they should
  • VMs become bloated and hard to manage over time
  • Setup in the team involved a lot of work
  • I wanted to learn Docker for quite some time because you hear a lot about it

In general, Docker is kind of like a virtual machine, so it allows us to develop in an OS of our choice (e.g. Windows) but run the code in the same environment as it will in production (e.g. on a linux server). Thanks to its core principles, it makes the separation of services really easy (e.g. having a dedicated server for your database) which - again - is something that should happen on production anyway.

Transition from Vagrant

On Windows, you can either use the Docker Toolbox (which is essentially a VM with Docker setup on it) or the Hyper-V based Docker for Windows. This tutorial will only look at the latter.

A word of caution: Unfortunately, we cannot have other Gods besides Docker (on Windows). The native Docker client requires Hyper-V to be activated which in turn will cause Virtualbox to not work any longer. Thus, we will not be able to use Vagrant and Docker alongside each other. This was actually the main reason it took me so long to start working with Docker.

Setup Docker

First,

Truncated by Planet PHP, read more at the original (another 59213 bytes)

Link
PHP: Hypertext Preprocessor: PHP 7.3.0 alpha 3 Released (5.7.2018, 00:00 UTC)
The PHP team is glad to announce the release of the third PHP 7.3.0 version, PHP 7.3.0 Alpha 3. The rough outline of the PHP 7.3 release cycle is specified in the PHP Wiki. For source downloads of PHP 7.3.0 Alpha 3 please visit the download page. Windows sources and binaries can be found on windows.php.net/qa/. Please carefully test this version and report any issues found in the bug reporting system. THIS IS A DEVELOPMENT PREVIEW - DO NOT USE IT IN PRODUCTION! For more information on the new features and other changes, you can read the NEWS file, or the UPGRADING file for a complete list of upgrading notes. These files can also be found in the release archive. The next release would be Alpha 4, planned for July 19th. The signatures for the release can be found in the manifest or on the QA site. Thank you for helping us make PHP better.
Link