Drupal 8 error: “The following reasons prevent the modules from being uninstalled: Fields pending deletion”

When you try to uninstall a module that provides a field you have used, it can throw the following error:

The following reasons prevent the modules from being uninstalled: Fields pending deletion

This is an issue in both Drupal 7 and Drupal 8. It happens because Drupal doesn’t actually delete a field’s data at the moment you delete the field; the data is purged in batches during cron runs. If cron hasn’t run enough times since you deleted the field, Drupal won’t let you uninstall the module.

To force Drupal to purge the data, you can run the following command:

drush php-eval 'field_purge_batch(500);'

Increase 500 to a high enough number to wipe out the data. After this has completed, you should be able to uninstall the module.
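Alternatively, since the purge happens during cron, you can simply trigger cron from drush a few times until the field data is gone (each run purges another batch):

drush cron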

Related reading:

  • Module uninstall dependencies (Drupal StackExchange)
  • Message “Required by Drupal (Fields Pending Deletion)” baffles users
  • Can’t uninstall YAML because of following reason: Fields pending deletion

Drupal 7 + Services: Paging & Filtering the index endpoint

There are a lot of ways to manipulate the data returned by the index endpoint. In this post, we are going to consider the node index endpoint. By default, this endpoint returns all nodes sorted in descending order of last update with 20 items per page.

You access the node index endpoint by going to

http://<domain>/<endpoint-path>/node.json (or the alias given to node in the resources section)

You can replace .json with other extensions to get the same data in different formats.

To access the second page, you can use the page parameter
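For example (an illustrative URL; pages are zero-indexed, so page=1 is the second page):

http://<domain>/<endpoint-path>/node.json?page=1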



To change the number of items on each page, you need the “perform unlimited index queries” permission. You use the pagesize parameter to change it.
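For example, to get 50 items per page (illustrative):

http://<domain>/<endpoint-path>/node.json?pagesize=50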

To filter on a field, you can use parameters[property], where ‘property’ is the column on which you want to filter. It needs to be a column on the node table, and not a Drupal field, as the endpoint does not do the joins needed to pull in field data.
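For example, to return only published article nodes, filtering on the type and status columns of the node table (illustrative values):

http://<domain>/<endpoint-path>/node.json?parameters[type]=article&parameters[status]=1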



To apply a filter other than equality, you can use options[parameters_op][property], where property is the same as above.
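For example, to return nodes changed after a given timestamp (illustrative, assuming ‘>’ is an accepted operator; it is URL-encoded as %3E):

http://<domain>/<endpoint-path>/node.json?parameters[changed]=1500000000&options[parameters_op][changed]=%3E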



To return fewer fields, you can use the fields parameter with a comma-separated list of properties. Once again, you can only specify properties on the entity (i.e. columns on the base table).
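For example, to return just the nid, title, and created properties (illustrative):

http://<domain>/<endpoint-path>/node.json?fields=nid,title,created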



You can sort the results by using options[orderby][property]=<asc|desc>.
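For example, to sort by creation date, newest first (illustrative):

http://<domain>/<endpoint-path>/node.json?options[orderby][created]=desc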



You can also mix and match these separate options:
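For example, the following illustrative request combines filtering, field selection, sorting, and paging:

http://<domain>/<endpoint-path>/node.json?parameters[type]=article&fields=nid,title&options[orderby][created]=desc&pagesize=5&page=0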


Understanding ZFS Disk Utilisation and available space

I am hopeful the following will help someone scratch their head a little less when trying to understand the information returned by ZFS.

I set up a pool using four 2TB SATA disks.

$ zpool list -v
NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool       7.25T  2.50T  4.75T         -    10%    34%  1.00x  ONLINE  -
  raidz2    7.25T  2.50T  4.75T         -    10%    34%
    sda2        -      -      -         -      -      -
    sdb2        -      -      -         -      -      -
    sdc2        -      -      -         -      -      -
    sdd2        -      -      -         -      -      -

The total size displayed here is the total size of the 4 disks. The maths works out as 4 × 2 TB = 8 TB ≈ 7.28 TiB, which lines up with the 7.25T shown.

RAIDZ2 is like RAID6: it uses two disks’ worth of space for parity. Thus, I would expect ~4 TB, or about 3.63 TiB, of usable space. I haven’t been able to find this number displayed anywhere.

However, you can find the amount of disk space still available using the following command.

# zfs list
NAME                       USED  AVAIL  REFER  MOUNTPOINT
rpool                     1.21T  2.19T   140K  /rpool
rpool/ROOT                46.5G  2.19T   140K  /rpool/ROOT
rpool/ROOT/pve-1          46.5G  2.19T  46.5G  /
rpool/data                1.16T  2.19T   140K  /rpool/data
rpool/data/vm-100-disk-1   593M  2.19T   593M  -
rpool/data/vm-101-disk-1  87.1G  2.19T  87.1G  -
rpool/data/vm-102-disk-1  71.2G  2.19T  71.2G  -
rpool/data/vm-103-disk-1  2.26G  2.19T  2.26G  -
rpool/data/vm-103-disk-2  13.2M  2.19T  13.2M  -
rpool/data/vm-103-disk-3  13.2M  2.19T  13.2M  -
rpool/data/vm-103-disk-4    93K  2.19T    93K  -
rpool/data/vm-103-disk-5  1015G  2.19T  1015G  -
rpool/data/vm-104-disk-1  4.73G  2.19T  4.73G  -
rpool/data/vm-105-disk-1  4.16G  2.19T  4.16G  -
rpool/swap                8.66G  2.19T  8.66G  -

The value of 2.19T is the amount of unallocated space available in the pool. To verify this, you can run

# zfs get all rpool
NAME   PROPERTY   VALUE                  SOURCE
rpool  type       filesystem             -
rpool  creation   Fri Aug  4 20:39 2017  -
rpool  used       1.21T                  -
rpool  available  2.19T                  -

If we add the two numbers here, 1.21T + 2.19T = 3.4T.

Around 5% of the pool’s space is reserved by ZFS, so 3.63 TiB × 0.95 ≈ 3.45 TiB, which is roughly the 3.4T we see.
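If you want to check the arithmetic yourself, here is a quick sketch (assuming awk is available; -p makes zfs print exact byte counts):

zfs get -Hp -o value used,available rpool | awk '{ s += $1 } END { printf "%.2f TiB\n", s / 1024^4 }'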

Et voilà!

[Mahout] Deploying custom drivers to mahout

Developing custom drivers for Mahout is fairly straightforward. You can inherit from MahoutDriver for Java drivers and MahoutSparkDriver for Spark drivers.

The Javadoc for MahoutDriver (if you can find it) provides a good summary of how to implement one:

General-purpose driver class for Mahout programs. Utilizes org.apache.hadoop.util.ProgramDriver to run main methods of other classes, but first loads up default properties from a properties file.

To run locally:

$MAHOUT_HOME/bin/mahout run shortJobName [over-ride ops]

Works like this: by default, the file “driver.classes.props” is loaded from the classpath, which defines a mapping between short names like “vectordump” and fully qualified class names. The format of driver.classes.props is like so:

fully.qualified.class.name = shortJobName : descriptive string

The default properties to be applied to the program run are pulled, by default, from “<shortJobName>.props” (also off of the classpath).

The format of the default properties files is as follows:

  i|input = /path/to/my/input
  o|output = /path/to/my/output
  m|jarFile = /path/to/jarFile
  # etc - each line is shortArg|longArg = value

The next argument to the Driver is supposed to be the short name of the class to be run (as defined in the driver.classes.props file).

Then the class which will be run will have its main called with

main(new String[] { "--input", "/path/to/my/input", "--output", "/path/to/my/output" });

After all the “default” properties are loaded from the file, any further command-line arguments are taken in, and over-ride the defaults.

So if your driver.classes.props looks like so:

org.apache.mahout.utils.vectors.VectorDumper = vecDump : dump vectors from a sequence file

and you have a file core/src/main/resources/vecDump.props which looks like

  o|output = /tmp/vectorOut
  s|seqFile = /my/vector/sequenceFile

And you execute the command-line:

$MAHOUT_HOME/bin/mahout run vecDump -s /my/otherVector/sequenceFile

Then org.apache.mahout.utils.vectors.VectorDumper.main() will be called with the following arguments:
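Following the pattern above, the call should look something like this (the -s on the command line overrides the seqFile default from vecDump.props):

main(new String[] { "--output", "/tmp/vectorOut", "--seqFile", "/my/otherVector/sequenceFile" });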

You can also deploy it slightly differently by just dropping the jar into the Mahout home directory and naming it so it starts with “mahout-”, e.g. mahout-mydriver.jar.

Hope this helps someone

Setting up Mahout in Linux

A few simple steps to get Mahout running on Linux. This is mostly about the bash script that makes it easy to run.

You’ll need to install Java first, then download and unpack the Mahout distribution.

I then placed it in /usr/local/mahout

To be able to run Mahout from anywhere on the PATH, a bash wrapper script along the following lines was placed in /usr/local/bin. Update the paths as relevant; the final exec line is a minimal sketch that forwards all arguments to the standard bin/mahout launcher:

#!/bin/bash
export MAHOUT_JAVA_HOME=/usr/lib/jvm/java-8-oracle/jre/
export MAHOUT_HOME=/usr/local/mahout
exec "$MAHOUT_HOME/bin/mahout" "$@"

[UE4] Unreal Engine & miniupnp

This post covers how to integrate UPnP into an Unreal project using miniupnp.

The first step is to clone the project from GitHub.
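Assuming the canonical repository (use your fork’s URL if you have one):

git clone https://github.com/miniupnp/miniupnp.git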

The module that we are interested in is miniupnpc. In that directory there is another directory called msvc, which contains the solution file for Visual Studio. Open this and, if you have a more recent version of Visual Studio (which you very likely do), it will want to upgrade everything. Let it go through the upgrade process.

Building the project now will most likely fail due to a missing file, miniupnpcstrings.h. This file needs to be generated, and the way to do that is to run a script in that folder called updateminiupnpcstrings.sh. You will most probably need something like Cygwin for this script to work, as it is a Unix shell script.

Once miniupnpcstrings.h has been generated, we also need to follow the Unreal Engine instructions for Linking Static Libraries Using The Build System, particularly the section on customizations for targeting UE4 modules.

From the project properties page, choose Configuration Manager. From the Active Solution Platform dropdown, select New, type in or select x64, and save it. You only have to do this for one of the projects.

The build will now fail because the exe project can’t find the static lib, which is now in x64\Release as opposed to just Release\. The exe is not required for integrating with Unreal Engine, but if you want to complete the build, just fix the path in Project Properties -> Linker -> Input.

You should choose the Release build instead of the Debug build; you should then be able to build the solution from Visual Studio. It did pop up some warnings for me, but the build completed successfully.

The rest of the instructions are in the Unreal Engine documentation about integrating static libraries, starting from the section about the Third Party directory.

[UE4] Source Control

As with any software project, it is important to use some form of source control. For most software projects, there are a number of good solutions out there. However, in the world of game development, most of these are not viable, since they don’t handle binary files very well, and an Unreal Engine project (like most games) will have a large number of binary assets.

Perforce is a good tool for small teams, since it is free for teams with fewer than 20 developers.

Another thing that can be confusing is which files and folders to add to source control. Generally, we do not want to include any files which can be auto-generated (e.g. builds) or are transient (e.g. logs). You should generally include the following in source control:

  • Config
  • Content
  • Intermediate/ProjectFiles/<projectname>.vcxproj* (the .vcxproj.user file may not be relevant if there are multiple developers)
  • Source
  • <projectname>.sln
  • <projectname>.uproject

I found it odd that the project file lives in the Intermediate folder, since one wouldn’t intuitively think to include it in source control.
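Conversely, it can help to tell Perforce what to leave out. A minimal .p4ignore sketch follows; the folder names are typical for a UE4 project but are assumptions, so adjust them to your own setup:

# Illustrative .p4ignore: generated and transient UE4/Visual Studio files
Binaries/
DerivedDataCache/
Saved/
.vs/
*.vcxproj.user
*.sdf
*.suo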