Skipole WSGI generator.


Development at GitHub:

github.com/bernie-skipole/skipole

Serving with the NGINX reverse proxy

In this example NGINX acts as a reverse proxy, forwarding calls to a project (called skitest) served by the Waitress web server at localhost:8000.

Install Waitress

You need skipole, your application, and the Waitress web server.

pip install skipole

Install the package 'python3-waitress'

apt-get install python3-waitress

Note: you could also use pip install waitress, as the package is available on PyPI.

Copy your projectfiles directory into /opt (or under any directory of your choice), creating directory:

/opt/projectfiles/

Ensure skitest.py is edited to remove skilift, skiadmin and the development server, and that Waitress is imported and serves your application with the following:


    from waitress import serve
    serve(application, host='127.0.0.1', port=8000)
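For clarity, here is a sketch of what the 'application' object being served is: any WSGI callable. The function below is only an illustrative stand-in; in a real project the application is generated by skipole and should not be replaced by hand-written code.

```python
# Hypothetical stand-in for the skipole-generated 'application' object,
# shown only so the shape of a WSGI callable is clear.
def application(environ, start_response):
    # A WSGI callable receives the request environ and a start_response
    # function, calls start_response with status and headers, and
    # returns an iterable of bytes.
    body = b"Hello from skitest"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]
```

Waitress's serve() call shown above accepts any such callable, which is why the same pattern works for the skipole-generated application.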

You could create a user to run the project; in this example the user 'www-data' is used.

Give the directory and its contents www-data:www-data ownership:

sudo chown -R www-data:www-data /opt/projectfiles

Then create a file:

/lib/systemd/system/skitest.service

containing the following:


    [Unit]
    Description=My project description
    After=multi-user.target

    [Service]
    Type=idle
    ExecStart=/usr/bin/python3 /opt/projectfiles/skitest.py

    User=www-data

    Restart=on-failure

    # Connects standard output to /dev/null
    StandardOutput=null

    # Connects standard error to journal
    StandardError=journal

    [Install]
    WantedBy=multi-user.target

Then set ownership and permissions of the file:

sudo chown root:root /lib/systemd/system/skitest.service

sudo chmod 644 /lib/systemd/system/skitest.service

Enable the service

sudo systemctl daemon-reload

sudo systemctl enable skitest.service

This starts /opt/projectfiles/skitest.py on boot up.

Finally reboot.

Install NGINX

Using apt-get, install the package nginx, which includes nginx-common and nginx-full (nginx-core on Linux Mint).

Use your browser to connect to 'localhost' and you should see the nginx web service running:

        Welcome to nginx!
        
        If you see this page, the nginx web server is successfully installed and working. Further configuration is required.
        
        For online documentation and support please refer to nginx.org.
        Commercial support is available at nginx.com.
        
        Thank you for using nginx.

So nginx is running and serving a default web page. We now need it to proxy requests to port 8000. A Debian-based system has two directories:

/etc/nginx/sites-available

/etc/nginx/sites-enabled

You will see under sites-available a default configuration file, and under sites-enabled a link to that file, which is the currently enabled default site.

Under /etc/nginx/sites-available create another configuration file skitest.conf:


    server {
        server_name _;
        listen 80;

        location / {
            proxy_pass http://localhost:8000/;
            proxy_buffering off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-Host $host;
            proxy_set_header X-Forwarded-Port $server_port;
        }
    }
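The proxy_set_header lines pass the original client details through to the backend, where a WSGI application sees each header as an HTTP_* key in the environ dictionary. A small sketch (the function name is illustrative, not part of skipole):

```python
# How the NGINX proxy headers appear to a WSGI application: each header
# becomes an HTTP_* key in the environ dictionary.
def client_address(environ):
    # Behind the proxy, REMOTE_ADDR is the proxy's own address
    # (e.g. 127.0.0.1); X-Real-IP carries the original client address.
    return environ.get("HTTP_X_REAL_IP", environ.get("REMOTE_ADDR", ""))
```

Without the proxy_set_header lines, the backend would only ever see the proxy's own address.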

Then, within directory /etc/nginx/sites-enabled delete the default link, and create a new link to skitest.conf:

sudo rm default

sudo ln -s /etc/nginx/sites-available/skitest.conf /etc/nginx/sites-enabled/

Now reboot the server, or restart nginx with the command "sudo service nginx restart".

Connecting to your server with a browser should now show your skitest project.

Load balancing

You may want multiple copies of skitest running, either on backend servers or in containers; in this case the NGINX proxy can distribute calls across these instances.

NOTE: This was tested on a Linux Mint server, with multiple LXC containers running within it to simulate a network of servers.

Under /etc/nginx/sites-available use the following configuration file:


    upstream server_group {
        server 10.239.52.58:8000;
        server 10.239.52.126:8000;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://server_group;
            proxy_buffering off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-Host $host;
            proxy_set_header X-Forwarded-Port $server_port;
        }
    }

The two IP addresses above direct calls to the two backend containers; replace them with your own addresses.

Each project, in its container, is served by Waitress with a command like:


    from waitress import serve
    serve(application, host='10.239.52.58', port=8000)

Note: The host IP address matches the address of each container. Normally I would put '0.0.0.0' here, but found this did not work on LXC containers.
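Before placing the containers behind the upstream group, it can be worth checking that each backend answers. A stdlib-only sketch (the addresses are the example IPs above, and the function name is illustrative):

```python
# Hypothetical smoke check: confirm each Waitress backend responds
# before NGINX starts balancing across them.
from urllib.request import urlopen

# The example backend addresses from the upstream block above
BACKENDS = ["http://10.239.52.58:8000/", "http://10.239.52.126:8000/"]

def backend_up(url, timeout=3):
    # Returns True if the backend answers with HTTP 200,
    # False on connection failure or any other error.
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False
```

Calling backend_up() on each address in BACKENDS before reloading NGINX confirms the whole group is healthy; NGINX itself will skip a failed upstream server, but a down backend still reduces capacity.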