{ "version": "https://jsonfeed.org/version/1", "user_comment": "This feed allows you to read the posts from this site in any feed reader that supports the JSON Feed format. To add this feed to your reader, copy the following URL -- http://admin.davidfindlay.com.au/feed/json/ -- and add it your reader.", "home_page_url": "http://admin.davidfindlay.com.au", "feed_url": "http://admin.davidfindlay.com.au/feed/json/", "title": "David Findlay", "description": "Nerd For Hire", "items": [ { "id": "http://admin.davidfindlay.com.au/how-to-set-up-amazon-s3-for-website-hosting/", "url": "http://admin.davidfindlay.com.au/how-to-set-up-amazon-s3-for-website-hosting/", "title": "How to Set Up Amazon S3 for Website Hosting", "content_html": "

Amazon Web Services S3 is a great way to host static websites. Here’s how to set up Amazon S3 for website hosting. 

\n

If you want to run WordPress on Amazon S3, see Serverless WordPress.

\n

This tutorial assumes you’ve already got an Amazon Web Services account.

\n
    \n
  1. Go to Services and search for or select S3.
     Select the S3 service
  2. Click Create Bucket.
     Select Create Bucket
  3. Enter a name for your bucket. The name must be globally unique, for instance bobs-cool-hosting.
     Give your bucket a name and select a region
  4. Select a region to host your bucket in. This is geographically where your files will be served from, so it’s best to choose a location close to where your users visit from. Click Next.
  5. The options can be left at their defaults; just click Next.
     Options can be left default
  6. Permissions setup is important. By default AWS S3 sets the bucket up to be secure and prevents it from being made publicly accessible. This is because so many people have set up buckets and accidentally or carelessly made them public, resulting in security breaches. We want our bucket to be public because we’re hosting a website, so uncheck all the public access settings and click Next.
     Permissions settings
  7. On the Review page you may be warned that this bucket may become public. That’s OK, as it’s what we intend, so click Create bucket.
     Review page
  8. We’ve now created our bucket; as you can see here it’s marked “Objects can be public”. Click on the name of the bucket to open it.
     List of buckets
  9. Click the Properties tab, then click Static website hosting.
     Select static website hosting
  10. Click the option Use this bucket to host a website. Take note of the URL at the top; this will be used to access our website. Type in index.html as the index document and error.html as the error document. Click Save.
     Configure static website hosting
  11. If you now go to the URL we noted, you’ll see it still says 403 Forbidden. We now need to set up its permissions to enable public access.
     By default access is prevented
  12. Click on the Permissions tab, then Bucket Policy. Copy in the following policy, being sure to change the bucket name in the Resource field from “my-serverless-wp” to match the name of your bucket. Click Save.
    {\r\n \"Version\": \"2012-10-17\",\r\n \"Statement\": [\r\n {\r\n \"Sid\": \"PublicReadGetObject\",\r\n \"Effect\": \"Allow\",\r\n \"Principal\": \"*\",\r\n \"Action\": \"s3:GetObject\",\r\n \"Resource\": \"arn:aws:s3:::my-serverless-wp/*\"\r\n }\r\n ]\r\n}
     Set up bucket policy
  13. Now create a test.html file with just a bit of text in it. On the Overview tab click Upload.
     Select Upload
  14. Click Add File, select the file you created, and then click Next.
  15. Under Manage public permissions select Grant public read access to this object(s). Click Next.
     Set object to public
  16. On the Set properties page the standard Storage Class is fine for this; click Next.
     Default properties are fine
  17. Click Upload. Our file will then be displayed in the list.
  18. Go to the bucket URL from step 10, enter it in a browser and add “/test.html” at the end. You should see your test.html page displayed.
     Test page is now displayed
\n

Your S3 bucket is now ready to serve your website, but you’ll probably want to set up a DNS CNAME to give it a friendly domain name. I’ll explain how to do that in another article.
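
If you prefer the command line, the same bucket setup can also be scripted with the AWS CLI. Here’s a minimal sketch, assuming the CLI is installed and configured with suitable credentials; the bucket name bobs-cool-hosting and the region ap-southeast-2 are placeholders to replace with your own:

# Create the bucket (bucket name and region are example placeholders)
aws s3api create-bucket --bucket bobs-cool-hosting --region ap-southeast-2 --create-bucket-configuration LocationConstraint=ap-southeast-2
# Enable static website hosting with the same index and error documents as above
aws s3 website s3://bobs-cool-hosting/ --index-document index.html --error-document error.html
# Apply the public-read policy from step 12, saved locally as policy.json
aws s3api put-bucket-policy --bucket bobs-cool-hosting --policy file://policy.json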

\n", "date_published": "2019-01-15T03:47:05+00:00", "date_modified": "2019-01-15T05:06:04+00:00", "author": { "name": "david" } }, { "id": "http://admin.davidfindlay.com.au/serverless-wordpress-sort-of/", "url": "http://admin.davidfindlay.com.au/serverless-wordpress-sort-of/", "title": "Serverless WordPress (sort of)", "content_html": "

Here’s how I run my site (davidfindlay.com.au) in a sort of serverless way using Amazon S3. I say it’s sort of serverless because you still need an Apache/MySQL/WordPress installation, but it doesn’t need to be running all the time and can just run on your local computer.

\n

Why host your WordPress site on Amazon S3?

\n

Firstly, S3 is very fast. WordPress hosted on a LAMP stack has to bootstrap WordPress, talk to the database, process your request and generate a page before sending it to the browser. This all takes time. That overhead makes sense if you host dynamic content; if your content doesn’t change much, it doesn’t.

\n

If you update your site maybe once a day, why have the HTML generated every time a visitor hits the site? With static WordPress hosting you generate the HTML once, when you make a change, and the generated HTML is then served to each new visitor. This is much faster. 

\n

As mentioned, S3 is very fast, but it’s also scalable. If your site suddenly gets visited by 10,000 people in an hour, S3 can handle it. Your typical WordPress installation on a LAMP hosting provider probably can’t. 

\n

Secondly, static hosting is more secure. Because your WordPress installation is hidden behind a firewall on your local network, you don’t have to worry about security updates and zero-day exploits as much. Sure, you should still keep up to date, but because attackers don’t have any access to the PHP pages or database, you’re kept much safer. Amazon has good security measures on S3 and, as long as you use them, your S3 bucket should be kept safe. 

\n

Assumed Knowledge

\n\n

Step 1: WordPress installation

\n

Firstly, install WordPress locally, perhaps using MAMP or an Apache/MySQL/PHP installation on a Linux box on your local network. How you do this part is up to you. I’ve actually got mine running on a small EC2 micro instance that I just turn on and off when I want to make changes to my site.

\n

No one will actually visit this WordPress installation, so it can just be local on your machine, not accessible via the internet. For maximum safety, firewall it off so no one can reach it.

\n

You’ll also need to install the AWS CLI. If you’re using an EC2 instance with an Amazon AMI, you’ll already have this. 

\n
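
If you need to install the AWS CLI yourself, it’s distributed as a Python package. A minimal sketch, assuming Python and pip are already available:

# Install (or upgrade) the AWS CLI for the current user, then verify it runs
pip install awscli --upgrade --user
aws --version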

Step 2: Set up an S3 Bucket

\n

You’ll need an Amazon Web Services account first; a free-tier account should be fine for most small sites for at least the first year. After that you may need to pay, but S3 is really cheap.

\n

There are a lot of steps to setting up an S3 bucket for website hosting, so I’ve put them in a separate article here: How to Set Up Amazon S3 for Website Hosting.

\n

Once you’ve got the S3 bucket set up, return here.

\n

Step 3: Install Simply Static WordPress Plugin

\n

This is pretty much a standard WordPress plugin install, so I won’t explain it in too much detail.

\n

The Simply Static plugin automatically generates a plain html version of your site and exports it to a directory on your WordPress host. 

\n

Static means that it’s plain HTML, no PHP. It can run on any sort of hosting without needing a PHP or MySQL installation. 

\n

Once Simply Static is installed, activate it.

\n
    \n
  1. Select Simply Static, then Settings from the left hand menu.
  2. Set Destination URLs to Use Relative URLs.
     Simply Static settings
  3. Set Delivery Method to Local Directory.
     Simply Static settings, continued
  4. Set Local Directory to a suitable location, for instance on my Linux installation “/var/www/html_static”. Take note of this path, as you may need to modify the script in Step 4 to match.
\n

Step 4: Configure AWS IAM user and AWS CLI

\n

You’ll need an AWS IAM user set up to use the AWS CLI.

\n
    \n
  1. Click Services at the top of the screen and in the search box type IAM. Click on the IAM option that appears in the drop down.
  2. Click Add User.
     Add IAM user
  3. Enter a user name such as “s3hosting”. Under Access Type, select Programmatic access. This is required so that the AWS CLI can use the user’s credentials. Click Next.
     Set up programmatic access
  4. Under Set Permissions, select Attach existing policies directly, then search for s3. Select the AmazonS3FullAccess policy. Click Next. Note that this policy means that anyone with this AWS Access Key ID and Secret Key could access any file in any bucket on your AWS account. This can be dangerous!
     Select existing policy
  5. Continue through to the review page with default settings. The review page should look like this. Click Create User.
     Review and create user
  6. You’ve now created the AWS CLI user. You’ll need the Access key ID and Secret access key displayed on this page for the next part of the process.
     Note access key ID and secret key
\n

Next move back to your terminal where you’ve installed your WordPress.

\n

Run the AWS configure command. You’ll need to supply the IAM user’s Access Key ID and Secret Key, as well as the default region, which should be the region your S3 bucket is in:

\n
aws configure
\n
\"Configure
Configure AWS CLI
\n
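
Running aws configure prompts for each value in turn; it looks something like this, where the key values shown are placeholders for the ones you noted when creating the IAM user:

$ aws configure
AWS Access Key ID [None]: AKIAEXAMPLEKEYID
AWS Secret Access Key [None]: exampleSecretAccessKey123
Default region name [None]: ap-southeast-2
Default output format [None]: json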

Create the following bash script and call it syncStatic.sh:

\n
#!/bin/bash\r\naws s3 sync /var/www/html_static s3://my-serverless-wp/
\n

Change ‘my-serverless-wp’ to match the name of your bucket; you may also need to change ‘/var/www/html_static’ to match the local directory you set in Step 3.

\n
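
Make the script executable before running it for the first time:

chmod +x syncStatic.sh
./syncStatic.sh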

Step 5: Generate Static HTML

\n

In the WordPress Admin pages, select Simply Static from the side menu. Click Generate.

\n

The log will show progress as the static HTML pages are generated. When the log shows “Done!”, move to the next step.

\n

Step 6: Sync to S3

\n

In your terminal, run the syncStatic.sh script. It’ll quickly transfer the files to S3. If you’re running on a small EC2 instance this will be super quick, but it will be a bit slower otherwise.

\n

Step 7: Test the site

\n

Go to your S3 public endpoint URL in your browser. For instance: http://my-serverless-wp.s3-website-ap-southeast-2.amazonaws.com/

\n

You can get your URL from the S3 bucket configuration by going to Services->S3->Select your bucket->Properties->Static Web Hosting

\n
\"Static
Static website hosting url
\n

After clicking on that URL or pasting it in your browser, you should be able to see your WordPress site and browse it.

\n

Step 8: Set up DNS CNAME

\n

Your site is now on the web, but it’s on an ugly Amazon AWS S3 URL. You don’t want to direct people to that. 

\n

The next step depends on how you want to host your site. You’ll need to set up a CNAME (canonical name) record which points your website domain to the AWS S3 bucket address.

\n

I’ll show how to do this for Amazon Route 53 DNS hosting in another article.

\n", "date_published": "2019-01-15T01:43:28+00:00", "date_modified": "2019-01-15T05:02:52+00:00", "author": { "name": "david" } }, { "id": "http://admin.davidfindlay.com.au/how-to-serve-an-angular-single-page-application-using-django-part-3/", "url": "http://admin.davidfindlay.com.au/how-to-serve-an-angular-single-page-application-using-django-part-3/", "title": "How to Serve an Angular Single Page Application using Django – Part 3", "content_html": "

Here’s part 3 of my series on how to serve an Angular 6 SPA web application in Django, without modifying the Angular CLI generated HTML. 

\n\n

Putting it all in a Docker Container

\n

I use Docker for development projects as it gives me a clean development environment where I can have all my dependencies isolated from other projects and software running on my system. 

\n

Disclaimer: this may not be the best way to run Django and Angular in Docker. I chose this method because I wanted to have my Angular app served by Django to avoid CORS problems and to keep the architecture as close as possible to what it’ll be in production. Obviously it wouldn’t be running on Django’s runserver in production, however. 

\n

Here’s how to build a Docker configuration to run my Django backend and Angular frontend together in the one container. 

\n

First create a docker-compose.yml file:

\n
version: '3'\r\n\r\nservices:\r\n  db:\r\n    image: postgres\r\n  web:\r\n    build:\r\n      context: .\r\n      dockerfile: Dockerfile\r\n    working_dir: /code\r\n    env_file:\r\n      - web_variables.env\r\n    command: sh devservers.sh\r\n    volumes:\r\n      - .:/code\r\n    ports:\r\n      - \"8000:8000\"\r\n    depends_on:\r\n      - db
\n

This docker-compose file first creates a PostgreSQL database container, then a web container built from a Dockerfile. When the web container is started it runs the devservers.sh script we created in Part 2.

\n

It mounts the current directory (the Django root) at /code within the container. It then maps port 8000 inside the container to port 8000 on the host. 

\n

In the Dockerfile put:

\n
FROM python:3\r\nENV PYTHONUNBUFFERED 1\r\nRUN mkdir /code\r\nRUN curl -sL https://deb.nodesource.com/setup_10.x | bash -\r\nRUN apt-get install -y nodejs\r\nWORKDIR /code\r\nADD requirements.txt /code/\r\nRUN pip install -r requirements.txt\r\nADD . /code/\r\nRUN mkdir -p /code/static\r\nWORKDIR /code/frontend\r\nRUN npm install -g @angular/cli\r\nRUN npm install\r\nRUN ng build --outputPath=/code/static
\n

This creates a new Docker container from the Python 3 official image. It creates a /code directory in the container.

\n

It then installs Node.js as a dependency for the Angular CLI and installs all the Django project dependencies from the requirements.txt file. 

\n

Finally, to test that the environment is ready, it copies in the code from the Django root, installs the Angular CLI globally, installs the Angular project dependencies and does a test build.

\n

Note that in the docker-compose file, we’re telling it to mount the current working directory as a volume on /code. So the /code path in the container will be replaced with the Django root from the host system. 

\n

Really this means that the lines from ADD onwards in the Dockerfile are unnecessary. However, I’ve left them in so that we find out at image build time if the environment won’t be ready to use. 

\n

Using the Docker environment

\n

To start the dev servers run the following command in the Django project root:

\n
docker-compose up -d
\n

As your Django project root is mounted in the Docker container, any changes you make to the Django/Angular project will also appear in /code in the Docker container. Django’s manage.py runserver will automatically detect any changes on the Django side and reload.

\n

Angular’s build watch will detect changes to the Angular code in <Django project root>/frontend and rebuild it, putting the output SPA into /code/static, where it’s served by Django’s static file serving. 

\n

If you add new files or npm dependencies to the Angular project, you’ll need to restart the web container using:

\n
docker-compose restart web
\n

If you add new dependencies to the Django project, do a full rebuild with:

\n
docker-compose up --build -d 
\n
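
Since the containers run detached, you can watch the dev servers’ output (handy while the first Angular build completes) with:

docker-compose logs -f web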

In my case I actually run my dev install on a different machine than the one I write code on, so I just upload via Webstorm or PyCharm to the project root on my dev server. The changes are detected and ready to use within seconds. 

\n", "date_published": "2019-01-14T22:16:07+00:00", "date_modified": "2019-01-14T22:19:59+00:00", "author": { "name": "david" } }, { "id": "http://admin.davidfindlay.com.au/how-to-serve-angular-and-django-together/", "url": "http://admin.davidfindlay.com.au/how-to-serve-angular-and-django-together/", "title": "How to Serve an Angular Single Page Application using Django – Part 2", "content_html": "

Here’s how to serve an Angular frontend and Django REST backend together, optionally in the same Docker container. 

\n

Here’s part 2 of my series on how to serve an Angular 6 SPA web application in Django, without modifying the Angular CLI generated HTML. 

\n\n

Running Django and Angular Auto-build Together

\n

I’ve been working on a project that uses Angular on the frontend and Django REST Framework on the backend. Both Django and Angular have their own development servers that feature auto-recompile on change, which is really handy. 

\n

I wanted to have the Angular app served by Django to avoid Cross-Origin Resource Sharing (CORS) issues and because Django is handling uploaded files. There are other ways to deal with this in development, but this is one way to do it.

\n

The development project is structured so that the frontend Angular source root is a sub-directory of the Django project root. 

\n
django_root/\r\n    manage.py\r\n    ... rest of django stuff ...\r\n    frontend/\r\n        angular.json\r\n        ... rest of angular stuff ...\r\n    static/
\n

See my article on How to Serve an Angular SPA in Django for the details of how to configure Django to serve the Angular application.

\n

I created a shell script to start the two dev servers and called it devservers.sh:

\n
#!/usr/bin/env bash\r\n\r\npython3 manage.py makemigrations\r\npython3 manage.py migrate\r\npython3 manage.py runserver 0.0.0.0:8000 &\r\n\r\nmkdir -p /code/static\r\ncd frontend\r\nnpm install\r\nng build --watch --outputPath=/code/static/
\n

Note: “/code/” may need to be changed to reflect the path to your Django project root. In my case I’m serving this in Docker and installing the Django root in /code.

\n

When the script runs, it first makes the Django database migrations, loads them into the database, then runs the Django dev server on port 8000. It then makes sure the static files path has been created, installs any Angular dependencies and build-watches the Angular code. 

\n

Note that instead of “ng serve” I’m using “ng build --watch”. This watches the Angular source code path like the ng serve command and automatically rebuilds the source when it changes. However, instead of serving it on port 4200, ng build deploys the compiled SPA to the outputPath, in this case /code/static/.

\n

Put the devservers.sh script in the Django root and run it. It’ll first start up Django and then Angular. Angular will take a minute or so to start up before you can access it. 

\n

You’ll only need to stop and restart the devservers.sh script when new files (or dependencies) are added to the Angular app. This is because the ng build watch option only looks at changes in existing files; it doesn’t detect new files. 

\n

In the next part I’ll show you how to put this all in a Docker container which is useful for keeping a clean development environment.

\n", "date_published": "2019-01-14T21:39:46+00:00", "date_modified": "2019-01-14T22:16:47+00:00", "author": { "name": "david" } }, { "id": "http://admin.davidfindlay.com.au/how-to-serve-an-angular-single-page-application-using-django/", "url": "http://admin.davidfindlay.com.au/how-to-serve-an-angular-single-page-application-using-django/", "title": "How to Serve an Angular Single Page Application using Django – Part 1", "content_html": "

Here’s part 1 of my series on how to serve an Angular 6 SPA web application in Django, without modifying the Angular CLI generated HTML. 

\n\n

Configuring Django to Serve an Angular SPA

\n

I’ve been working on a blog/gallery web application that uses Django to provide a RESTful backend API and has an Angular 6 front end on the client side. In doing this I wanted to have the Angular single page application served by the Django server. Configuring this was a little complicated, as there was no clear, complete explanation available.

\n

Other tutorials do exist, but they require modifying the files generated by the Angular CLI to fit into Django’s templating and static file system. See this for an example: https://medium.com/swlh/django-angular-4-a-powerful-web-application-60b6fb39ef34

\n

I didn’t want to have to do this. Using whitenoise and django-spa, you can serve the Angular application without modification. 

\n

Add the dependencies to requirements.txt:

\n
whitenoise\r\ndjango-spa
\n

Add whitenoise to the INSTALLED_APPS list in settings.py:

\n
INSTALLED_APPS = [\r\n    'whitenoise.runserver_nostatic',\r\n    'django.contrib.staticfiles',\r\n    ...\r\n]
\n

Configure static files in settings.py:

\n
# Static files (CSS, JavaScript, Images)\r\n# https://docs.djangoproject.com/en/1.11/howto/static-files/\r\n\r\nSTATIC_URL = '/static/'\r\nSTATICFILES_STORAGE = 'spa.storage.SPAStaticFilesStorage'\r\nSTATIC_ROOT = os.path.join(BASE_DIR, 'static')
\n

When deploying your web application, create a “static” directory under the Django project base. Use “ng build” to build your Angular web application, then copy everything from the “dist/<your_angular_app>/” directory to the “static” directory in the Django project. 

\n
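
As a concrete sketch of that deploy step, run from the frontend directory (here my-angular-app is a placeholder for the project name in your angular.json):

# Build the production bundle, then copy it into Django's static directory
ng build --prod
mkdir -p ../static
cp -r dist/my-angular-app/* ../static/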

After starting the Django server:

\n
python3 manage.py runserver 0.0.0.0:8000
\n

you should see your Angular SPA start up and run at http://<server>:<port>/. You’ll also be able to access the application via http://<server>:<port>/static/, however all the URLs in there will reference /, so it’s best to use the root URL.

\n

You can see a full example of this in the pnb gallery project on GitHub.

\n

In the next part I’ll show you how to get the two dev servers running together to autobuild both Django and Angular while having the Angular SPA served by Django. 

\n", "date_published": "2018-12-16T02:03:25+00:00", "date_modified": "2019-01-14T22:17:23+00:00", "author": { "name": "david" } }, { "id": "http://admin.davidfindlay.com.au/speed-up-deploy-via-scp-in-gitlab-cicd/", "url": "http://admin.davidfindlay.com.au/speed-up-deploy-via-scp-in-gitlab-cicd/", "title": "Speed up Deploy via SCP in GitLab CI/CD", "content_html": "

Deploying thousands of small files via SCP to a server takes an inordinately long time, even over a very fast network, much longer than transferring one large file. Here’s a tutorial on my GitLab CI setup for compressing all my deployment files into one large tarball, transferring it to the server, then uncompressing it there. 

\n

I have been experimenting with GitLab CI/CD for use with my Swimming Management System projects for Masters Swimming Queensland. It’s a legacy project that I’m now gradually transitioning to modern standards. I’ve set up a Pipeline that will allow me to test commits in PHPUnit, then have them automatically deploy to a dev/test server. 

\n

The project now uses Composer and, with some Angular modules, it also uses NPM. In the build phase on GitLab CI, composer install is run to get the dependencies into the vendor directory. 

\n

To do it this way, you’ll need to have shell access via ssh to your server. 

\n

I’ve set up the test server deployment details and authentication credentials as GitLab CI/CD Variables:

\n
\"Configuration
Configuration of GitLab CI/CD Variables
\n

In the deploy stage I’ve added the following code prior to upload via scp:

\n
- tar -czf /tmp/build.tar.gz .\r\n- echo \"TEST_SSHPATH=${TEST_SSHPATH}\" > sshenv\r\n- sshpass -e scp -P ${TEST_SSHPORT} -r -o stricthostkeychecking=no sshenv ${TEST_SSHUSER}@${TEST_SSHHOST}:~/.ssh/environment
\n

In this case, the target ‘.’ indicates that we are tarballing the current directory.

\n

In the second line we create a file that contains the definition of an environment variable on the target server, which is set to the GitLab CI variable TEST_SSHPATH. TEST_SSHPATH contains the path to the root of my code deployment on the server.

\n

We then scp this file to the target server, putting it in ~/.ssh/environment. This means that when we ssh into the server, that environment variable will be available to us, containing the value from the GitLab CI variable. (Note that sshd only reads ~/.ssh/environment when PermitUserEnvironment is enabled in sshd_config.) 

\n

Now we can transfer the build.tar.gz file and un-tar it. 

\n
- sshpass -e scp -P ${TEST_SSHPORT} -r -o stricthostkeychecking=no /tmp/build.tar.gz ${TEST_SSHUSER}@${TEST_SSHHOST}:${TEST_SSHPATH}\r\n- sshpass -e ssh -p ${TEST_SSHPORT} -o stricthostkeychecking=no ${TEST_SSHUSER}@${TEST_SSHHOST} \"cd ${TEST_SSHPATH}; tar -zxf build.tar.gz\"
\n

The first line in this block does the transfer via scp, to the path provided in the variable TEST_SSHPATH.

\n

The second line connects via ssh, changes directory to the code deployment path, then extracts the build files. Note that tar does not remove the build.tar.gz after extraction, so you may want to delete it separately or exclude it from the next tarball.

\n
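
For context, here’s roughly how those pieces fit together as a single deploy job in .gitlab-ci.yml. This is a sketch rather than my exact pipeline: it assumes the job image has sshpass installed, that SSHPASS (read by sshpass -e) and the TEST_* variables are defined in GitLab CI/CD Variables, and that you deploy from master; adapt to suit:

deploy_test:
  stage: deploy
  script:
    # Compress the whole build into a single tarball
    - tar -czf /tmp/build.tar.gz .
    # Publish the deployment path to the server's ssh environment
    - echo "TEST_SSHPATH=${TEST_SSHPATH}" > sshenv
    - sshpass -e scp -P ${TEST_SSHPORT} -r -o stricthostkeychecking=no sshenv ${TEST_SSHUSER}@${TEST_SSHHOST}:~/.ssh/environment
    # Transfer the tarball and unpack it in place
    - sshpass -e scp -P ${TEST_SSHPORT} -r -o stricthostkeychecking=no /tmp/build.tar.gz ${TEST_SSHUSER}@${TEST_SSHHOST}:${TEST_SSHPATH}
    - sshpass -e ssh -p ${TEST_SSHPORT} -o stricthostkeychecking=no ${TEST_SSHUSER}@${TEST_SSHHOST} "cd ${TEST_SSHPATH}; tar -zxf build.tar.gz"
  only:
    - master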

Before making this change, my swimman project would take 16:30 to build and install. With this change it’s down to 4:35. That’s a saving of 12 minutes which makes a big difference when deploying a quick fix to the test environment. 

\n", "date_published": "2018-11-21T07:48:45+00:00", "date_modified": "2018-11-21T07:58:24+00:00", "author": { "name": "david" } }, { "id": "http://admin.davidfindlay.com.au/monitoring-mqtt-services-in-an-angular-web-application/", "url": "http://admin.davidfindlay.com.au/monitoring-mqtt-services-in-an-angular-web-application/", "title": "Monitoring MQTT Services in an Angular Web Application", "content_html": "

I’m going to show you how to use the Paho MQTT JavaScript implementation in an Angular application to subscribe to a topic and display the message content in a web browser.

\n

I’ve built this as part of my ongoing project to build a custom Weather Station for my home. 

\n

Prerequisites:

\n\n

MQTT Packages for Angular

\n

There are a few MQTT-related packages for Angular integration on npm. I’ve chosen to use ng2-mqtt; it works for me at this point. I did consider using ngx-mqtt, but as its build was marked as failing at the time of writing (13/11/2018) I decided to use the simpler ng2-mqtt.

\n

Procedure

\n

First, set up an Angular CLI Project. I’m using Angular 7.0.3. 

\n

Install ng2-mqtt using npm:

\n
npm install --save ng2-mqtt
\n

For simplicity of testing and demonstration, I’m using ng2-mqtt in my AppComponent. As I further develop my weather station display app, I’ll probably move it to a specific service for managing weather data.

\n

Import Paho at the top of your app.component.ts:

\n
import {Paho} from 'ng2-mqtt/mqttws31';
\n

Add some properties to the AppComponent class for storing the incoming data. Change these to match your needs:

\n
windSpeed: number;\r\nwindDirection: number;
\n

Also add a private member for the MQTT client. I’ve also put in a variable for storing the IP address or hostname of my MQTT broker. In a real world implementation you’d put this in some sort of configuration management.

\n
private client;\r\n\r\nmqttbroker = 'localhost';
\n

Your AppComponent should implement OnInit:

\n
export class AppComponent implements OnInit {
\n

Then add a ngOnInit implementation:

\n
ngOnInit() {\r\n  this.client = new Paho.MQTT.Client(this.mqttbroker, Number(9001), 'wxview');\r\n  this.client.onMessageArrived = this.onMessageArrived.bind(this);\r\n  this.client.onConnectionLost = this.onConnectionLost.bind(this);\r\n  this.client.connect({onSuccess: this.onConnect.bind(this)});\r\n}
\n

In this code we create a client object and tell it the address of the MQTT Broker and the port number we’re using. I’ve given my client the name ‘wxview’. We then specify callbacks that will be used for when a message is received, a connection is lost, and a connection is established.

\n

Note when setting references to member functions that handle each event, we call the .bind(this) function. This ensures that “this” in the callback function refers to the AppComponent class, not the MQTT Client. For more information, see the Function.prototype.bind() article on Mozilla Developer Network.

\n

Now we can set up the onConnect function:

\n
onConnect() {\r\n  console.log('onConnect');\r\n  this.client.subscribe('wxstation/wind_speed');\r\n  this.client.subscribe('wxstation/wind_direction');\r\n}
\n

This function tells the client to subscribe to specified topics. In my case I’ve used topics that are specific to my weather station project. You can change the topics to be subscribed to for your project specific requirements.

\n

We set up onConnectionLost as per the documentation:

\n
onConnectionLost(responseObject) {\r\n  if (responseObject.errorCode !== 0) {\r\n    console.log('onConnectionLost:' + responseObject.errorMessage);\r\n  }\r\n}
\n

This will just log the error to the browser console if the connection is lost.

\n

Then we build our function to handle any messages received:

\n
onMessageArrived(message) {\r\n  console.log('onMessageArrived: ' + message.destinationName + ': ' + message.payloadString);\r\n\r\n  if (message.destinationName.indexOf('wind_speed') !== -1) {\r\n    this.windSpeed = Number(message.payloadString);\r\n  }\r\n\r\n  if (message.destinationName.indexOf('wind_direction') !== -1) {\r\n    this.windDirection = Number(message.payloadString);\r\n  }\r\n\r\n}
\n

In this function I first log the message topic (message.destinationName) and payload (message.payloadString) to the web browser console. 

\n

I then check the topic of the message received and update the appropriate member variable.

\n

To finally put this all together, build a template that binds to these variables:

\n
<div class=\"container-fluid\">\r\n  Wind Speed: {{windSpeed}}<br />\r\n  Wind Direction: {{windDirection}}\r\n</div>
\n

Now when you open your app in the browser, you’ll see the web page continually update as new MQTT messages are received by the broker. 

\n
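
To test the page without real weather station hardware, you can publish sample readings using the Mosquitto command line client. This assumes your broker also has a plain MQTT listener on the default port 1883 alongside the websockets listener on port 9001:

# Publish test values to the topics the app subscribes to
mosquitto_pub -h localhost -t wxstation/wind_speed -m "12.5"
mosquitto_pub -h localhost -t wxstation/wind_direction -m "270"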
\n
\n

The Full Code

\n

Check it out on Stackblitz.

\n", "date_published": "2018-11-13T01:41:42+00:00", "date_modified": "2018-11-14T10:00:28+00:00", "author": { "name": "david" }, "attachments": [ { "url": "http://admin.davidfindlay.com.au/wp-content/uploads/2018/11/mqttchanges.mp4", "mime_type": "video/mp4", "size_in_bytes": 38726 } ] }, { "id": "http://admin.davidfindlay.com.au/notes-on-entry-and-result-management-at-the-pan-pacific-masters-games-swimming-2018/", "url": "http://admin.davidfindlay.com.au/notes-on-entry-and-result-management-at-the-pan-pacific-masters-games-swimming-2018/", "title": "Notes on Entry and Result Management at the Pan Pacific Masters Games Swimming 2018", "content_html": "

Last week I attended my 3rd Pan Pacific Masters Games (PPMG) Swimming competition in the role of Chief Recorder. This is a role that I have created and developed over my time as Director of Recording for Masters Swimming Queensland, and one which I believe is critically important to the running of successful large swimming meets. 

\n

Preparing to run the meet

\n

In 2018 we had 564 competitors in the PPMG Swimming event. Many of these entrants came from outside of Australia, with a large contingent from New Zealand, New Caledonia and China. This presents a significant challenge. Manual input of entries into the sports event management software, Hy-Tek Meet Manager, would take many days of work for volunteers, and in the past it has been very prone to error. Masters Swimming Queensland has its own online entry system which interoperates with Hy-Tek Meet Manager, but it is usually open only to members.

\n

To handle this, since 2014 I’ve developed tools which allow the data from the PPMG Administration’s entry system to be imported into the Masters Swimming Queensland system. The system uses a multistep approach which allows errors to be detected and dealt with. Every year the PPMG Administration has had a different data format for entries, so for each of the biennial events changes have had to be made to the system. 

\n

In step 1, the CSV of the entries is uploaded to the MSQ Entry Manager system and a list of entries is created. In Step 2, matches between PPMG Entrants and masters swimming members known by MSQ Entry Manager are flagged and linked. In Step 3, temporary event memberships are created in the MSQ Entry Manager system for non-members and international entrants. Then in Step 4, individual event entries are created for all entrants in the MSQ Entry Manager system.

\n

Individual event entries include what is known as a seed time. This is the entrant’s estimation of what time they expect to swim in the event. This time is used to put entrants into heats with other entrants of similar capabilities. 

\n

As part of Step 4 mentioned above, I’ve developed natural language processing technology which takes a wide variety of time formats and converts them into the internally used quantity of seconds. For instance, the correct time format for “2 minutes, 34.23 seconds” is “2:34.23”, but this may be entered by users as “2:34:23” or “2.34.23”. Or it may be spelled out as “2 min 34.23 sec”. I’ve had an automatic time normalisation system in place for some time, but a newly upgraded version is now able to handle all such formats and correctly understand the intention of the user when they typed in the time. I’ll be publishing a paper on this technique along with a reference implementation in the future. 

\n
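
To give a flavour of the simplest cases, here’s a tiny illustrative bash sketch. This is not the actual implementation, which handles far more formats, including spelled-out ones:

#!/bin/bash
# Normalise common seed-time typos like "2:34:23" or "2.34.23" to "2:34.23"
normalise_time() {
  local t="$1"
  if [[ $t =~ ^([0-9]+)(:|\.)([0-9]{2})(:|\.)([0-9]{2})$ ]]; then
    echo "${BASH_REMATCH[1]}:${BASH_REMATCH[3]}.${BASH_REMATCH[5]}"
  else
    echo "$t"   # anything unrecognised is left for manual review
  fi
}
normalise_time "2:34:23"   # -> 2:34.23
normalise_time "2.34.23"   # -> 2:34.23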

From this point onwards the entry data can be handled in the MSQ system in the same way as we handle any swimming meet. Standard checks that I’ve developed were run against all entry times, looking to flag times that appeared to be too short (less than 20 seconds per 50 metres) or too long (greater than 2 minutes 30 seconds per 50 metres). I have plans to add automated checks against national and world record times, as well as against individual competitors’ personal bests, but there was not enough time to get these prepared for the PPMG2018 meet. 

\n

The ultimate result of this was that we had one of the cleanest sets of entry data we’ve ever had for a Masters Games. All errors found in the draft entry lists were due to user error by the entrants. Quite simply, they were caused by people typing in the wrong entry time, selecting the wrong events, or not knowing how long it would take them to swim a particular event. 

\n

There were some issues that carried over from the PPMG entry system. Where entrants had edited their entries on the PPMG entry system, the edits were not reflected in the exported data provided to sports organisers by PPMG. However, this was easily rectified because I was able to publish draft lists and we had the time and capacity to make changes to entries before the start of the event. We were able to accept several late entries and late changes, because our entry management systems were so efficient and refined. 

\n

In the final days before the meet, I produced meet programmes for printing and extracted statistics about competitors for use in the handouts to competitors. PPMG Administration required full updates on any changes to the entries for the swimming competition, so I used Trello to manage my workflow. I created lists for To Do, Doing, Waiting, Done, PPMG Informed and PPMG Information Not Required. When a new change request came in via any channel (email, phone, etc.), I immediately created a card for it in To Do. Where changes could not be actioned due to further information being needed, these were put into Waiting, with notes about the next action required. Once complete, each card was moved into Done. From there I made a decision on whether or not PPMG Administration needed to be informed. If so, I emailed it to them in the next batch and once done moved the card to PPMG Informed. Otherwise I’d put the card into PPMG Information Not Required, for changes that PPMG Administration didn’t need to know about. This allowed me to keep PPMG Administration fully informed on all changes they needed.  

\n

Unfortunately, there were some data corruption issues in the import this time. Some non-member entrants were incorrectly imported into the system as female. This was quickly corrected before the day the meet started. It was isolated to just a small subset of the entries, and they were able to be manually checked. The few that were missed were fixed when entrants checked the draft entry list. Others had club information that did not import correctly, partially because international masters were inconsistent about how they provided their club details. This would have to be resolved as the meet proceeded. 

\n

During the Competition

\n

During the competition I oversaw all matters related to event entries and results. Actual operation of the timing system (Quantum Automated Officiating Equipment, or AOE) and the meet software (Hy-Tek Meet Manager 7) was handled by two highly skilled contracted staff members who work with the venue on a regular basis. 

\n

My role was to act as an interface between Masters Swimming Queensland and the recording staff to ensure that MSQ’s needs were met. I was responsible for changes to the programme, entries and the integrity of the results. 

\n

Where changes were to be made to the programme on future days, I would handle these each night after competition. Where a change was to be made in a future event on the same competition day, this was handled by the recording operator. Changes to the currently running event were delegated to the Marshalling team, who would then inform recording. This approach enables us to ensure that entrants are able to flexibly change their entries as needed. If a competitor arrives late for a heat, marshalling is able to put them into an empty lane from another heat. Provided the information is given to recording in a timely fashion, the scoreboard and result information can be immediately updated to reflect the change and to ensure that the results are attributed to the correct person. 

\n

I’ve always taken the approach that if I can accommodate an entrant’s request for a change, I will. I want the competitors to enjoy the event as much as possible, so they’ll want to return again in the future. Arbitrary rules based on perceived data management limitations prevent this. With the right team and the right procedures in place, result data management doesn’t limit changes to sporting event entries. In sporting events where individuals are competing directly by their own performance, there is no good reason not to allow changes to programmes right up to the last minute.

\n

Daily Routine

\n

During a large swim meet my start of day routine is as follows:

\n
    \n
  1. Check overnight scratchings and programme change requests. Action where possible.
  2. Produce a Meet Manager backup file for start of day, provide to Meet Manager operator.
  3. Produce Marshalling Sheets and provide to Marshalling, so they can get started with organising events and heats for the day. I also provide Marshalling with two copies of the programme.
  4. Produce Lane Sheets and provide to Chief Timekeeper, so they can be distributed to Lane Timekeepers.
  5. Produce programmes for the refereeing officials as necessary.
\n

This order of processing ensures that the other teams working on the meet get what they need in order of priority. Recording takes the highest priority, followed by marshalling. Marshalling needs to have heat swimmers organised 5-10 minutes ahead of their actual heat, so they need their information before other officials. After that, the lane timekeepers need to have their paperwork so they can write down whether or not there was a swimmer in their lane and any changes to the expected swimmer’s identity. Finally, the referees need programmes to know who they have in different lanes. However, they have the lowest priority, as they can work simply from heat number and lane number if needed, referring to recording to find out the identity of the infracting swimmer. 

\n

By following this start of day process, even when there are technical delays, I can help ensure the meet can get underway on time. 

\n

Throughout the meet, I ensure that any recording problems are quickly resolved. 

\n

Each afternoon at the end of the meet I did the following:

\n
    \n
  1. Get a copy of the backup from the main recording computer.
  2. Produce a report of all the day’s results with splits to be sent to the PPMG Administration and MSQ for posting on their respective websites.
  3. Export interim results for upload to the MSA Results Portal.
  4. Action updates and changes known for subsequent days.
\n

Relays

\n

The other big task for me in my role as Chief Recorder is overseeing the organisation of relay teams. Normally this has been entirely done on the day at the PPMG. This year PPMG Administration allowed entrants to nominate and pay for relay entries when people entered the PPMG. This presented some challenges.

\n

The MSQ Entry Manager system previously only tracked the overall cost and overall payment of an entrant’s entry to the entire swimming meet. This would not easily allow us to track relay nomination payments. 

\n

I had to make some decisions about system design and business rules to enable tracking of these nominations and payments:

\n\n

I upgraded the MSQ Entry Manager system to track the cost of event nominations and payments for those nominations. I created an interface to track those payments. I had planned to also allow new nominations and payments to be recorded, but this was not completed in the end due to time constraints and competing priorities. 

\n

An existing interface from previous MSQ meets was used to show the cost of each relay team, and the payments made online for those relay entries. Now that the meet has been completed, I will be exporting these details to Excel spreadsheets so that total amount owed by clubs for relay entries can be calculated and invoiced via PayPal. 

\n

Non-club relay team payments on the day were noted in a receipt book for future reconciliation. It would have been good to have this handled in the MSQ Entry Manager system, but again due to time constraints this wasn’t possible. 

\n

In future events I’ll have this interface prepared and volunteers trained in advance to operate the relay tasks. 

\n

The other part of relay nominations at PPMG meets is actually getting the team information into the Meet Manager system. Relay nominations can be entered directly into Meet Manager, but this is not a user friendly process and requires a second computer linked to the live Meet Manager recording computer.

\n

At my first PPMG, I spent many hours entering paper relay team forms into Meet Manager. This process was laborious and difficult. Some people’s writing was unreadable. Forms were not completely filled out. Entrant names were not able to be found in the entrant list, or entrants had been entered into more than one relay team in the event. After this debacle, I built a new jQuery based relay entry system for PPMG16. 

\n

At PPMG16 the new system meant that the volunteers at the Relay Desk directly entered entries into Entry Manager’s Relay Entries module. It prevented people from being in more than one team, and allowed search and selection of relay team members from the competitor list. It enforced relay team rules; for instance, club relays were only able to have members from that club, whereas unattached relays could have any entrant in them. The system was very successful at that meet and cut relay entry workload considerably. In the end it proved to be easier for the Relay Desk volunteers to take a paper form and then enter it into the computer later, than processing it in the computer at the time of presentation. However, other rules I enforced, such as fully filling out relay forms before they could be accepted and requiring relay team contact phone numbers, meant that the desk was easily able to get all relay teams organised with limited involvement by me.

\n
\"The
The new MSQ Entry Manager Club Relay Teams module
\n

Once relay teams were created in MSQ Entry Manager, they were able to be downloaded as a hy3 file for direct import into Meet Manager. This meant no double handling of the already checked relay entry data and minimal errors. 

\n

This time there were fewer volunteers available for the relay desk, so on the first day of relays I needed to spend most of the morning there. This lack of volunteers and the early relay events on Day 2 made the day a bit of a struggle. However, the system still performed well. Some international masters member club data corruption issues originating in the import of PPMG entrant data did require a small amount of remediation after import into Meet Manager, but the workload was still considerably less than if we’d done it the old way.

\n

As previously mentioned, we intended to put people who had nominated online for a relay event into random teams if they did not find their own team. We did this on the first day of relay events. However, many of the people we put into teams never turned up at marshalling. On the remaining days we only put people who had presented to the relay desk into teams. There were no complaints about this change and it meant fewer stranded relay team members. 

\n

The club data corruption also seemed to cause some problems with the scoreboard when relays were imported into Meet Manager. Entries are usually imported into Meet Manager using a hy3 file. Checking the hy3 files showed no differences between a hy3 file exported by Hy-Tek Team Manager and a hy3 file exported from MSQ Entry Manager. Yet after importing relays the scoreboard’s country field showed the club name, instead of country of origin. The issue had not appeared when the same system was used for PPMG16 and the MSA National Championships in 2017 at the same venue. Further analysis and testing will be required to remediate the problem for future events. 

\n

This year I developed and deployed a new online relay entry module for Masters Swimming clubs to use when registering their relay teams for the PPMG event. Instead of having to go to the relay desk with forms, Masters clubs were able to register a club captain who was then able to use an online interface to register their teams. The module was built using a frontend based on Bootstrap 4. As this had to be implemented in our legacy Joomla CMS, the functionality was built using jQuery. Implementing advanced functionality such as two-way data binding was more difficult in jQuery, but ultimately it was possible to provide a very modern, accessible and easy to use user experience. Over half the relay teams in the meet were registered via the tool and feedback from clubs was very positive. 

\n

I will be reimplementing the new relay system in Angular; it will be part of the new MSQ Quick Entry system under development for future meets. This will allow us to retire the old Joomla CMS based entry system and give me the ability to implement new functionality more easily. 

\n

Other Recording Functions

\n

Another function I provide during swim meets is the delivery of statistics and meet information to the announcer. Records broken are provided where possible to the announcer to inform the competitors and spectators. This is secondary to my role of ensuring the meet recording runs smoothly. In this particular meet, due to various time constraints and lack of volunteers, I was only able to provide limited updates to the announcer. In future I’d like to organise a dedicated person in the recording team to provide such information to the announcer, PPMG Administration and media as applicable. This would mean that these functions continue even if I’m busy troubleshooting other higher priority issues. 

\n

This meet was the second major event where MSQ has included Multi-class competition. Competitors with disabilities are able to compete in the same heats and events as able-bodied athletes and are scored in their own age group categories. This is something quite new for Masters Swimming in Australia and we still lack sophistication in this area. By and large the multi-class part of the event functioned well, but there were issues in registration and results publishing. Primarily these relate to us just not having a comprehensive understanding of how Meet Manager handles multi-class results, and not yet having a fully developed set of procedures. Through the lessons learnt out of PPMG18, I intend to develop a full set of procedures to be adopted at state and club levels, which will make our operation of future multi-class events easier and trouble-free. 

\n

I’ve made contact with Victorian clubs who are also involved with multi-class and intend to use these connections to work towards an effective nation-wide approach for multi-class recording in Masters Swimming. 

\n

In Conclusion

\n

Since the end of the event I’ve received a lot of praise for the way the swimming event was run at the Pan Pacific Masters Games 2018. This was a major team effort with huge contributions from Meet Director Shane Knight, MSQ Administrator Christina Scolaro, Susanne Milenkevich, Martin Banks and many, many others. I’d especially like to thank Liala Davighi for her help with relays. 

\n

Over coming months I’m planning to consolidate the lessons learned and start building our systems for the next large MSQ events, starting with State Championships in 2019 and the Great Barrier Reef Masters Games. I hope to build an ongoing team in the recording space to ensure we can have world class data systems that allow MSQ to lead innovation in community sports events. 

\n

Not many people actually realise all the work that goes into running a major swimming meet. There’s been months in the lead up, and there’s still weeks worth of work for me. I still have to provide official results to international Masters Swimming governing bodies and finalise relay reconciliation information to provide to our finance auditors. At least a couple more weeks of work in evenings and weekends outside my full-time job and family responsibilities. Hopefully this helps people understand what goes into running such an event.

\n

\n", "date_published": "2018-11-12T01:48:26+00:00", "date_modified": "2018-11-14T09:58:05+00:00", "author": { "name": "david" } }, { "id": "http://admin.davidfindlay.com.au/tasmania-holiday-2018-part-1/", "url": "http://admin.davidfindlay.com.au/tasmania-holiday-2018-part-1/", "title": "Tasmania Holiday 2018 – Part 1", "content_html": "

In late February 2018 we took our young family (a 2¾-year-old and a 7-month-old) to Tasmania to visit my Grandmother, who lives in Pontypool on the East Coast of Tasmania. This blog post has a short listing of what we did and tips for doing a similar trip with young children.

\n

Parking

\n

Having two young children meant getting a lift to the airport wouldn’t really work for us. Also as we needed our pram and car seats in Tasmania, the AirTrain wouldn’t work either. So we decided to use an airport parking service. As it turned out there was a special on the Brisbane Airport ParkValet service. 

\n

This option was fantastic for us. We were able to drive straight in and had plenty of space to unload the car seats and luggage from the car. There was also a concierge option that we probably would have taken, but it was only offered when we first booked and we couldn’t add it later. We didn’t really need it in the end though.

\n
\"All
All our luggage and car seats unloaded at Brisbane Airport ParkValet
\n

The Flight to Hobart

\n

Our flights were on Virgin Australia, who helpfully let you take any baby related stuff on your flight without any excess baggage charges. We needed to book a seat for Lily as she is over 2, but Jasmine rode in Jacqui’s lap.

\n

We got to board the plane first, with passengers who had special needs. This gave us time to get the kids on board, carry-on stowed away and everyone settled. Jasmine had a special infant seatbelt that attached on to Jacqui’s. She didn’t much like being strapped in and tried to squirm out as much as possible. 

\n

On the way down Lily sat between us and I (David) sat by the window. Lily is prone to being very upset by loud noises such as motorbikes. However, she was actually excited by the take off and wasn’t upset at all. We didn’t have any ear problems on the ascent either.

\n
\"David,
David, Lily, Jacqui and Jasmine on our flight to Hobart
\n

We were able to keep Lily amused with toys, colouring-in and, for a short while, the iPhone. She was a bit annoyed that she couldn’t access Netflix or ABC iView and didn’t like anything on Virgin’s entertain app. 

\n

On descent Lily did get quite upset which we believe was due to pain in her ears. We did try a few things to get her to equalise the pressure but she wasn’t able to understand. She didn’t settle until just before landing.

\n

Arrival

\n

On arrival we waited until everyone else was off the plane to get out, so we could pick up all the lost toys from under our seats. On the tarmac we saw a business jet from the USA that had been equipped with weather research equipment for the SOCRATES project, studying the interactions between clouds and particles naturally produced by the ocean, such as sea salt and biogenic particles.

\n
\"National
National Center for Atmospheric Research aircraft at Hobart Airport
\n

When we walked into the terminal we were right in front of the Melbourne Demons AFL team arriving from Melbourne, so there was a WIN TV crew filming us. We were told that Jacqui and Jasmine appeared in the preview and sports news item about it. 

\n

The Hobart terminal arrivals area is quite small, so there was a massive crowd around the baggage carousel when I got there. I managed to find a spot near the end and was surprised that the pram and car seats, which were taken as oversize luggage in Brisbane, came out on the carousel. 

\n

By the time I’d come back, Lily had made a friend in the waiting area. The game was that the other family’s daughter would give Lily a lolly; Lily would give it to Mum because she didn’t like it, then Jacqui would pass it back to the little girl’s brother. This went on for some time while I organised the hire car pick up. 

\n
\"Lily
Lily made a friend in the terminal
\n

Hire car pick up was a tag team effort as Jacqui and I swapped duties watching the luggage and filling out paperwork at the Hertz desk. Eventually we were all sorted and we left the terminal. The little girl Lily had befriended was quite upset by this. 

\n

Hire Car

\n

We thought we’d be smart and hire a larger vehicle for our trip. We’d had a struggle fitting our luggage into our Corolla, with one suitcase having to go in the back seat and the other blocking access to the pram in the boot. 

\n

We hired a medium sized SUV, listed on Hertz as a Nissan Qashqai or similar. We ended up with a Mitsubishi ASX. Immediately I noted a problem. There was no way to fit a pram and suitcases in the boot. In fact all it would fit was a pram. Even if we removed the rear parcel shelf cover there’d still be not enough space to fit them and it’d be dangerous without a cargo barrier.

\n

\n

So it turns out a Toyota Corolla sedan actually has more cargo space than a medium SUV Mitsubishi ASX.

\n

It took quite some time to get the car seats installed and adjusted. This was complicated by light rain at the time. One frustrating thing I found was that after I’d installed Lily’s car seat, the rear seatbelt was looped in the wrong place. So I had to try to move the seat forward without completely removing the car seat. After what seemed like forever and several escape attempts by Lily, we got in the car and headed off to go up the East Coast to Grandma’s place.

\n", "date_published": "2018-03-04T21:43:45+00:00", "date_modified": "2018-03-05T09:56:33+00:00", "author": { "name": "david" } }, { "id": "http://admin.davidfindlay.com.au/brat-runs-amok-on-national-tv/", "url": "http://admin.davidfindlay.com.au/brat-runs-amok-on-national-tv/", "title": "Brat Runs Amok on National TV", "content_html": "

\n

Today’s cutesy viral video is from the UK where a mother was being interviewed on TV with her two children. The younger toddler runs around the studio, climbing up on the desk and everyone ignores it. In fact everyone laughs and thinks it was cute!

\n

As the father of two children under 3 I find this behaviour absolutely disgusting. Not the toddler’s behaviour, I know toddlers sometimes do run amok even with the best discipline and training. 

\n

The problem I have with this is everyone’s reaction, especially the mother’s. It’s not okay to ignore such terrible behaviour in public. It’s not funny, it’s not cute, it’s unacceptable. 

\n

You see this in public places every day. Parents are standing in the queue for a bank teller, for instance, meanwhile their kids are terrorising the whole bank, climbing on chairs and counters, drawing all over forms and making too much noise. Usually the parents are completely oblivious to their little brats’ anti-social behaviour. 

\n

Children must be taught that there are times where they must stand still and quiet with their parents. On several occasions I stood at the swimming pool holding my daughter’s hand while she pulled and screamed as we waited for her mother to get changed. She just wanted to run around and play.

\n

However after consistently making her stand still and asking her to be quiet, now when I do this she does stand still with me and remain mostly quiet while waiting. She’ll also sit with me in a chair quietly and wait for long periods of time. Sure I often have to remind her to sit still and quiet, but she will do it most of the time. Yes she tries to test the boundaries, but with constant reinforcement it’s possible to keep her behaving. 

\n

Courteous behaviour in public, respect for other people and their property and waiting are all major life skills that children need to learn. We do nothing to help them learn those skills by laughing at or calling it cute when they misbehave.

\n", "date_published": "2017-08-24T06:03:01+00:00", "date_modified": "2017-08-25T04:55:57+00:00", "author": { "name": "david" } } ] }