Tuesday, December 09, 2014

Things I've learned (forcibly) on 2014-12-09


  1. There are database systems that are actually implemented using the Julian (sort of) calendar. Why?!
  2. It is perfectly legitimate to use script types other than type="text/javascript" in a <script> tag.

Saturday, December 06, 2014

Adding less to your MVC 5 project

I'm going to have to just redirect to this post, because I'm in a rush. Hopefully it's still around when you read this.

Note: In order to get this to work with my application pool running in 'Integrated Pipeline Mode' (which IIS 8 didn't like), I had to get rid of the system.web/httpHandlers sections from the dotLess configuration.

Wednesday, November 26, 2014

Creating Azure Services with Visual Studio 2013 and Windows 8.1


  1. Download and install Visual Studio 2013
  2. Download and install the Web Platform Installer, v5.0 or greater
  3. From the Web Platform Installer, install:
    1. Windows Azure SDK and related Powershell utilities and command line tools
  4. Start a new solution (or open an existing one)
  5. From the NuGet package manager in your solution, ensure that you've installed the 'WindowsAzure.Storage' package, or at least have it in your cache. This is going to be required by the New Project wizard when generating the project.
  6. Add a new project
  7. In the 'Add new project' wizard, select the C# projects -> Cloud -> Windows Azure Cloud Service
  8. Follow this absurdly easy tutorial on implementing an ErrorHandler interceptor (behavior)
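    In case that tutorial vanishes, here's a minimal sketch of the usual pattern (not the tutorial's exact code): a class implementing IErrorHandler to log faults and sanitize what goes back to the client, and IServiceBehavior to register itself on every channel dispatcher. The class name ErrorHandlerBehavior is my own placeholder.
      namespace MyService.CloudStorage.Support
      {
          using System;
          using System.Collections.ObjectModel;
          using System.ServiceModel;
          using System.ServiceModel.Channels;
          using System.ServiceModel.Description;
          using System.ServiceModel.Dispatcher;

          /// <summary>
          /// Logs unhandled service exceptions and replaces them with a generic fault.
          /// </summary>
          public class ErrorHandlerBehavior : IServiceBehavior, IErrorHandler
          {
              /// <inheritdoc/>
              public bool HandleError(Exception error)
              {
                  // Log the exception here (e.g. via your ILoggingService); returning true marks it handled.
                  return true;
              }

              /// <inheritdoc/>
              public void ProvideFault(Exception error, MessageVersion version, ref Message fault)
              {
                  // Never leak internals to the caller; send a generic fault instead.
                  var faultException = new FaultException("An internal error occurred.");
                  fault = Message.CreateMessage(version, faultException.CreateMessageFault(), faultException.Action);
              }

              /// <inheritdoc/>
              public void ApplyDispatchBehavior(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase)
              {
                  // Hook this error handler into every channel dispatcher of the host.
                  foreach (ChannelDispatcher dispatcher in serviceHostBase.ChannelDispatchers)
                  {
                      dispatcher.ErrorHandlers.Add(this);
                  }
              }

              /// <inheritdoc/>
              public void AddBindingParameters(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase, Collection<ServiceEndpoint> endpoints, BindingParameterCollection bindingParameters)
              {
              }

              /// <inheritdoc/>
              public void Validate(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase)
              {
              }
          }
      }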
  9. Add logging to your application using the Microsoft Patterns & Practices Enterprise Library Logging Application Block (which can be found here). 
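    A minimal usage sketch for the Logging Application Block (assuming the Enterprise Library 6 NuGet package and a trace listener configured in web.config; the ILoggingService wrapper bound in the Ninject module further below would encapsulate something like this):
      using Microsoft.Practices.EnterpriseLibrary.Logging;

      // Enterprise Library 6 requires the static Logger facade to be initialized once at startup.
      Logger.SetLogWriter(new LogWriterFactory().Create());

      // Write a simple informational entry to the configured listeners.
      var entry = new LogEntry
      {
          Message = "Cloud storage service started",
          Severity = System.Diagnostics.TraceEventType.Information
      };
      entry.Categories.Add("General");

      Logger.Write(entry);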
  10. Create a SQL server database in Azure using this tutorial on MSDN. When designing the user roles and authentication, it's recommended that you use the ASP.NET Identity membership design and design your tables accordingly around this.
    1. Microsoft has provided an extension to the ASP.NET Identity membership framework specifically for EntityFramework. They recommend that you use a code-first model for generating entities.
    2. Log in to Azure through the management portal: http://manage.windowsazure.com/
    3. Configure all of your connections and permissions to the database. e.g. you may want to have multiple users: one for read-only operations, another for read-write operations.
    4. Generate or write your data model. This article will show you how to create a code-first data model with Entity Framework. Ensure that you include users so that you can support proper application authentication and authorization via the ASP.NET Identity membership framework.
    5. The link above also includes instructions on using code-first migrations for when you update your data model.
    6. TODO: Elaborate on how to properly set up the database when performing a code-first database design (a rough sketch follows below in the meantime)
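      In the meantime, here's a rough, hedged sketch of a code-first context that layers your own entities on top of the ASP.NET Identity tables. The StorageItem entity, the MyService.CloudStorage.Data namespace and the 'DefaultConnection' connection string name are illustrative placeholders.

      namespace MyService.CloudStorage.Data
      {
          using System.Data.Entity;
          using Microsoft.AspNet.Identity.EntityFramework;

          /// <summary>A placeholder entity used only for illustration.</summary>
          public class StorageItem
          {
              public int Id { get; set; }

              public string Name { get; set; }

              /// <summary>Foreign key to AspNetUsers.Id, tying the row to an Identity user.</summary>
              public string OwnerId { get; set; }
          }

          /// <summary>
          /// Deriving from IdentityDbContext produces the AspNetUsers/AspNetRoles tables
          /// alongside your own entities in a single code-first model.
          /// </summary>
          public class MyServiceDbContext : IdentityDbContext<IdentityUser>
          {
              public MyServiceDbContext()
                  : base("DefaultConnection") // connection string name pointing at the Azure SQL database
              {
              }

              public DbSet<StorageItem> StorageItems { get; set; }
          }
      }

      From there, Enable-Migrations, Add-Migration and Update-Database in the Package Manager Console keep the Azure database in step with model changes (the code-first migrations mentioned above).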
  11. Add an IoC container to your WCF service to enable you to easily develop and unit test it. I've chosen to use Ninject because, despite a few minutes of initial frustration, it's actually exceedingly easy to integrate with WCF, especially now that there's the Ninject WCF extensions NuGet package. To integrate Ninject with your WCF service, perform the following steps:
    1. Install-Package Ninject.Extensions.Wcf -Version 3.2.1.0 (Ninject and Ninject.Web.Common are installed as dependencies for you automatically).
    2. Create a NinjectModule descendant to bind your interfaces to concrete implementations. It should look something like the following:
      namespace MyService.CloudStorage.Support
      {
          using System.Diagnostics.CodeAnalysis;
          using System.ServiceModel;
          using Common.Interfaces;
          using global::Ninject.Modules;
          using global::Ninject.Syntax;
          using Services;
          using NinjectServiceHost = global::Ninject.Extensions.Wcf.NinjectServiceHost;
      
          /// <summary>
          /// A <see cref="NinjectModule"/> descendant for bootstrapping our application
          /// </summary>
          public class MyServiceCloudStorageNinjectModule : NinjectModule
          {
              /// <inheritdoc/>
              [SuppressMessage("Microsoft.StyleCop.CSharp.DocumentationRules", "SA1604:ElementDocumentationMustHaveSummary", Justification = "InheritDoc")]
              public override void Load()
              {
                  this.Bind<IResolutionRoot>().ToConstant(Kernel);
                  this.Bind<ServiceHost>().To<NinjectServiceHost>();
                  this.Bind<IMyServiceStorage>().To<MyServiceStorage>();
                  this.Bind<ILoggingService>().To<MicrosoftEnterpriseLoggingBlockLoggingService>();
              }
          }
      }
      
    3. Create a Global Application Class (global.asax) if one's not already created: Right-click on your project -> Add -> New Item ... and in the window that comes up, go to Visual C# -> Web -> Global Application File
    4. Update your Global class to extend from Ninject.Web.Common.NinjectHttpApplication, override the CreateKernel() method, and return a StandardKernel loaded with the MyServiceCloudStorageNinjectModule created above. The method should look something like this:
      namespace MyService.CloudStorage
      {
          using System.Diagnostics.CodeAnalysis;
          using System.Web;
          using Ninject;
          using Ninject.Web.Common;
          using Support;
      
          /// <summary>
          /// A global <see cref="HttpApplication"/> class for managing the lifecycle
          /// of the application
          /// </summary>
          public class Global : NinjectHttpApplication
          {
              /// <inheritdoc/>
              [SuppressMessage("Microsoft.StyleCop.CSharp.DocumentationRules", "SA1604:ElementDocumentationMustHaveSummary", Justification = "InheritDoc")]
              [SuppressMessage("Microsoft.StyleCop.CSharp.DocumentationRules", "SA1615:ElementReturnValueMustBeDocumented", Justification = "InheritDoc")]
              protected override IKernel CreateKernel()
              {
                  return new StandardKernel(new NinjectSettings(), new MyServiceCloudStorageNinjectModule());
              }
       }
      }
      
    5. Update your .svc file for your service(s) and add an extra XML attribute that looks like this: <%@ ServiceHost Service="HelloNinjectWcf.Service.GreetingService" Factory="Ninject.Extensions.Wcf.NinjectServiceHostFactory" %>
  12. Implement your business logic
    1. If you're designing your service properly, you'll need to ensure that communications between clients and your service are secure. Toward that end, you'll need to use SSL to encrypt your communications. There's a number of things involved in this:
      1. Generate certificates for your server and your client
      2. Ensure that your IIS server is correctly configured to use 'https' bindings on your site, along with the server-side certificate that you've generated.
      3. If you're using Windows Store Apps to access a WCF service, you'll need to ensure that you use a CustomBinding correctly configured with the right *BindingElement objects to create an SSL secured, HTTPS-transported binding.
        • For the server, in your service's concrete implementation, you'll need to remove any .config file configuration and add a static Configure method that WCF discovers by convention, which looks like the following:
          /// <summary>
          /// The service certificate store name
          /// </summary>
          private const StoreName ServiceCertificateStoreName = StoreName.My;
          
          /// <summary>
          /// The service certificate store location
          /// </summary>
          private const StoreLocation ServiceCertificateStoreLocation = StoreLocation.LocalMachine;
          
          /// <summary>
          /// Configures the specified configuration.
          /// </summary>
          /// <param name="config">The configuration.</param>
          /// <remarks>
          /// A service endpoint configuration method determined by convention.
          /// <see href="http://msdn.microsoft.com/en-us/library/hh205277(v=vs.110).aspx"/>
          /// </remarks>
          public static void Configure(ServiceConfiguration config)
          {
           ServiceEndpoint serviceEndpoint = new ServiceEndpoint(
            ContractDescription.GetContract(typeof(IMyServiceStorage), typeof(MyServiceStorage)), 
            new CustomBinding(
             new TransportSecurityBindingElement(),
             new SslStreamSecurityBindingElement { RequireClientCertificate = false },
             new TextMessageEncodingBindingElement(MessageVersion.Soap12WSAddressing10, Encoding.UTF8),
             new HttpsTransportBindingElement()
            ), 
            new EndpointAddress("https://localhost/MyService.CloudStorage/MyServiceStorage.svc")
           );
          
           config.AddServiceEndpoint(serviceEndpoint);
          
           const string ServiceCertificateThumbprint = "[a 40 digit hexadecimal certificate thumbprint here]";
          
           X509Store certificateStore = new X509Store(ServiceCertificateStoreName, ServiceCertificateStoreLocation);
          
           certificateStore.Open(OpenFlags.ReadOnly);
          
           X509Certificate2Collection x509Certificate2Collection = certificateStore.Certificates.Find(
            findType: X509FindType.FindByThumbprint,
            findValue: ServiceCertificateThumbprint,
            validOnly: false
            );
          
           certificateStore.Close();
          
           X509Certificate2 serviceCertificate = x509Certificate2Collection.Cast<X509Certificate2>().FirstOrDefault();
          
           if (serviceCertificate == null)
           {
            throw new ConfigurationErrorsException(String.Format("No certificate representing the service with thumbprint {0} could be found in {1} store at {2} location", ServiceCertificateThumbprint, ServiceCertificateStoreName, ServiceCertificateStoreLocation));
           }
          
           ServiceCredentials serviceCredentials = new ServiceCredentials
           {
            IdentityConfiguration = new IdentityConfiguration
            {
             // TODO: Change this to authenticate the clients
             CertificateValidationMode = X509CertificateValidationMode.None
            },
            ServiceCertificate =
            {
             Certificate = serviceCertificate
            }/*,
            ClientCertificate =
            {
             // TODO: Resolve the client certificate
             Certificate = serviceCertificate
            }*/
           };
          
           config.Description.Behaviors.Add(serviceCredentials);
           // config.Description.Behaviors.Add(new ServiceMetadataBehavior { HttpGetEnabled = true, HttpsGetEnabled = true });
          }
          

        • For the Windows Store App client, you'll need to have a similarly configured counterpart channel factory and binding for connecting to the service:
          this.channelFactory = new ChannelFactory<IMyServiceStorageChannel>(
           binding: new CustomBinding(
            new TransportSecurityBindingElement(),
            new SslStreamSecurityBindingElement(),
            new TextMessageEncodingBindingElement(MessageVersion.Soap12WSAddressing10, Encoding.UTF8),
            new HttpsTransportBindingElement()
           ),
           remoteAddress: new EndpointAddress(serviceUri)
          );
          
          this.passwordVault = new PasswordVault();
          
          // This IDispatchMessageInspector is a custom addition for our own brand of authentication
          this.channelFactory.Endpoint.EndpointBehaviors.Add(new ClientAuthenticationDispatchMessageInspector(this.passwordVault));
          
        • For the Windows Store App, you'll also need to have the server's public key (assuming it's not trusted, e.g. a self-signed certificate) added to the app's certificate declarations in the package manifest. There's a video on how to do it here on Channel 9. For the sake of convenience, I'll reproduce the steps here:
          1. Obtain your server's public key in DER-encoded .cer format.
          2. Open the Package.appxmanifest file for your App in Visual Studio
          3. Go to the Declarations tab.
          4. Under the 'Available Declarations' box, select 'Certificates' and click 'Add' if there's no Certificates declaration already added.
          5. Select the 'Certificates' declaration in the 'Supported Declarations' box.
          6. In the 'Certificates' group on the large pane, click 'Add New'.
          7. In the certificate parameters box that comes up, enter 'Root' in the 'Name' field, and select your public key file. Once you do this, Visual Studio will automatically import it into your project.
          8. Save the manifest.
  13. Publish it to Azure Services
    1. If you've designed your application correctly, you'll be using HTTPS for communicating with your clients. Read Microsoft's guide on MSDN to uploading a certificate with your service.
  14. Implement your Windows 8.1 client // TODO: Elaborate on this
  15. Unit test your Windows 8.1 client on your own local machine.
    1. Visual Studio 2012 / .NET 4.5 added the ability to do asynchronous unit tests in MSTest! (A minimal example follows after this list.)
    2. In order to unit test the Windows 8.1 Metro client against services on your localhost, you'll have to read this. It describes the new security features in Windows 8(.1) and how to explicitly enable your application to communicate through the network loopback interface. 
    3. You can also read this. Bottom line, you have to ensure that you enable a loopback exemption for your unit tests.
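    A minimal async test sketch for reference (MyServiceStorageClient and its UploadAsync method are hypothetical stand-ins for your own client code; a Windows Store unit test project uses the Microsoft.VisualStudio.TestPlatform.UnitTestFramework namespace instead of the desktop one shown here):
      using System;
      using System.Threading.Tasks;
      using Microsoft.VisualStudio.TestTools.UnitTesting;

      [TestClass]
      public class StorageClientTests
      {
          [TestMethod]
          public async Task UploadAsync_ReturnsTrue_ForSmallPayload()
          {
              // Placeholder client pointed at the locally hosted service endpoint.
              var client = new MyServiceStorageClient(new Uri("https://localhost/MyService.CloudStorage/MyServiceStorage.svc"));

              bool result = await client.UploadAsync("hello.txt", new byte[] { 1, 2, 3 });

              Assert.IsTrue(result);
          }
      }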
  16. Unit / integration test it against your Azure service
    1. Ensure that you've created a 'Staging' area in your Azure configuration panel and you're not testing against production! Testing against production is extremely poor practice!
  17. Publish your App on the Windows Store if you so choose


Tuesday, October 28, 2014

Correcting the default check-in action for associated work items in Team Foundation Server

I've found that it's a giant pain in the ass in TFS when associating work items to check-ins, because the default association action is 'Resolve' rather than 'Associate'. When I'm working on features, I like to do bits of functionality in units and check them in to source control in small batches to make my changes more manageable. Unfortunately, many times I've forgotten to change the association action when associating work items with change sets, and it's resolved my issue instead of just associating the work item with the change set. This becomes problematic because it skews my records and metrics for how much time is spent working on a task. Fortunately, one can change this in the registry:

HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\[Version Number]\TeamFoundation\SourceControl\Behavior

Quick fix.

Friday, October 24, 2014

Regarding pre-test invocation scripts when deploying to a Test Agent with TFS

So, I learned something interesting about TFS Test Agents today and the way they handle pre-test invocation scripts when running tests on a Test Agent. In Microsoft Test Manager, do the following:

  1. Connect to a Team Project
  2. Change into 'Lab Center' mode
  3. Click on 'Test Settings' in the top bar-ish area. This will open up the Test Settings Manager.
  4. Edit or create a new 'Test Settings' item and open it up.
  5. In the 'test settings' editor, under the 'Steps' column on the left-hand side, go to 'Advanced' -> 'Scripts'. You're now presented with the scripts page where you can specify scripts to be invoked before and after the execution of your test run.
Now, **here's the important thing**:
The script file(s) you specify in these boxes do not get copied to the Test Agent per se. Instead, their contents get read and merged with an automatically generated script that's created by the Test Agent. The script that actually gets run on the Test Agent will look similar to the following:

REM ****************************************************************************
REM *  Generated by Microsoft Visual Studio
REM *  Copyright (c) Microsoft Corporation. All rights reserved.
REM *
REM ****************************************************************************
set ResultsDirectory=C:\Users\Autotest\AppData\Local\VSEQT\QTAgent\23620\00C4EF~1\Results
set DeploymentDirectory=C:\Users\Autotest\AppData\Local\VSEQT\QTAgent\23620\00C4EF~1\DEPLOY~1
set TestRunDirectory=C:\Users\Autotest\AppData\Local\VSEQT\QTAgent\23620\00C4EF~1
set TestRunResultsDirectory=C:\Users\Autotest\AppData\Local\VSEQT\QTAgent\23620\00C4EF~1\Results\00C4EF~1
set TotalAgents=1
set AgentWeighting=100
set AgentLoadDistributor=Microsoft.VisualStudio.TestTools.Execution.AgentLoadDistributor
set AgentId=1
set TestDir=C:\Users\Autotest\AppData\Local\VSEQT\QTAgent\23620\00C4EF~1
set BuildDirectory=\\SOMESERVER\SomeShare\TFSDrops\HRST\CWSTAM~1.SPR\CWSTAM~3.1
set DataCollectionEnvironmentContext=Microsoft.VisualStudio.TestTools.Execution.DataCollectionEnvironmentContext
set TestLogsDir=C:\Users\Autotest\AppData\Local\VSEQT\QTAgent\23620\00C4EF~1\Results\00C4EF~1
set ControllerName=TSTTFSTHR01:6901
set TestDeploymentDir=C:\Users\Autotest\AppData\Local\VSEQT\QTAgent\23620\00C4EF~1\DEPLOY~1
set AgentName=00C4EFD8-A65F-4B5E-AD5F-04F93895B543
REM ****************************************************************************
REM *  User Commands
REM *
REM ****************************************************************************
echo "My actual user commands from my script file here"


This is important to keep in mind when you're writing the commands in the script to be executed: no files get copied along with the script file you reference in the Test Settings, and there's no record of where that script file originally came from, so the script is effectively executed without any context except for that which the TFS Test Agent provides in the prefixed lines.

Sunday, October 19, 2014

A huge annoyance in Windows 8 store apps

Apparently Bindings are no longer TwoWay by default as they were in WPF. They're now OneWay, which was a huge annoyance and wasted a solid half hour of my time trying to debug my bindings.
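
If a binding silently refuses to push values back into your view model, this is why. Here's a quick sketch of opting back in to TwoWay from code-behind (the UserName property path is a placeholder; in XAML the equivalent is adding Mode=TwoWay to the binding expression):

using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Data;

// Equivalent of {Binding UserName, Mode=TwoWay} in XAML; leave Mode off and you now get OneWay.
var textBox = new TextBox();
textBox.SetBinding(
    TextBox.TextProperty,
    new Binding
    {
        Path = new PropertyPath("UserName"),
        Mode = BindingMode.TwoWay
    });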

Monday, September 01, 2014

Creating a WPF app with Microsoft Prism Framework 5

To get started, do the following:

Wednesday, August 20, 2014

Getting an "RPC endpoint not found/not listening" exception when connecting to a remote machine with PowerShell

Lately I've been dealing with a lot of remote management for the purposes of automating our deployment process for the product on which I'm working. I've been able to connect to other (pre-configured) machines, but when I wanted to connect to my own machine in unit tests, I was unable to do so until now. Each time I tried to connect, I'd get an exception along the lines of "The remote RPC server is not responding". I double-checked that my "Windows Remote Management (WS-Management)" service was up and running, so I was perplexed as to why I still couldn't connect. I had turned off my firewall (temporarily, of course), and as if that wasn't enough, I'd explicitly enabled the rules for Windows Remote Management. As it turns out (at least when you're running Windows Server 2008 R2), the service runs by default, but is not configured to allow remote management by default. (Totally makes sense, right? /sarcasm) To remedy this, you need only run the following under an Administrator command line:

winrm quickconfig

This will enable your machine to accept incoming connections. You should also ensure that your firewall has been properly configured to allow the remote management rules (pre-existing, come with Windows). Also make sure that your service is actually running.
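
For context, the connection I'm making from test code looks roughly like the following sketch (assuming a project reference to System.Management.Automation; the Get-Service command is just an example):

using System;
using System.Management.Automation;
using System.Management.Automation.Runspaces;

// Open a WinRM-based remote runspace against the local machine; this is the call that
// kept failing until 'winrm quickconfig' had been run on the target.
var connectionInfo = new WSManConnectionInfo(); // defaults to the local machine over WinRM
using (Runspace runspace = RunspaceFactory.CreateRunspace(connectionInfo))
{
    runspace.Open();

    using (PowerShell ps = PowerShell.Create())
    {
        ps.Runspace = runspace;
        ps.AddScript("Get-Service WinRM");

        foreach (PSObject result in ps.Invoke())
        {
            Console.WriteLine(result);
        }
    }
}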

Saturday, August 09, 2014

Creating a certificate chain of self-signed certificates for development / testing / private environments

As anybody who's ever tried to develop secure services with SSL knows, it's expensive to buy trusted certificates from a certification authority. This is especially true if you're an independent developer who doesn't have a lot of resources. Therefore, we need to be able to generate self-signed certificates in order to develop and test our code before we actually go buy a Trusted Certificate for production. This tutorial will show you how to create a chain of trust and start generating certificates from a self-signing authority. The information here is based off of Microsoft's documentation on MSDN about the matter.


  1. Create a signing authority certificate:
    • makecert -n "CN=My Signing Authority" -r -sv MySigningCert.pvk MySigningCert.cer
    You'll be prompted for passwords for securing the private key. Ensure that you remember them; you'll need them to create the merged file.

  2. Merge the private key file and public key file into an encrypted key (this isn't mentioned in the MSDN article linked above, but you can find the documentation here):
    • pvk2pfx /pvk MySigningCert.pvk /spc MySigningCert.cer /pfx MySigningCert.pfx /pi mycertpassword /po mycertpassword /f
    This step isn't necessary for signing site certificates, but it does make things more convenient for storing the certificate and installing it on different machines. Be careful: you should never leave keys lying around file systems on machines; they should always either: a) be stored in an encrypted store like that provided by Windows, or b) be stored on separate storage media that can be physically locked away, with access available only to trusted personnel.

  3. Start creating site certificates with your signing certificate:
    • makecert -iv MySigningCert.pvk -n "CN=www.mywebsite.com" -ic MySigningCert.cer -sv sitekey.pvk sitekey.cer -pe
    Now, as above, I recommend that you merge the .pvk and .cer into a .pfx for easy transport and storage.

Thursday, July 17, 2014

Retargeting a Windows 8 application to Windows 8.1

Apparently, fuck Windows 8. So says everybody, including Microsoft. That's why at some point you're going to have to retarget your Windows 8 app (if you were crazy enough to make any) for Windows 8.1. Microsoft provides a guide for doing so in Visual Studio 2013 here; fortunately, it's as simple as right-clicking on your solution in the Solution Explorer and clicking Retarget for Windows 8.1.

Tuesday, July 15, 2014

Making TFS builds consistent with desktop builds when invoking MSBuild directly on a .*proj file

As it turns out, MSBuild has more than a few quirks when invoked through TFS compared to being invoked from the command line or from Visual Studio. Some of them are pretty well documented. Others are not, like the fact that in a .*proj file, the OutDir property is inherited by sub-MSBuild tasks. There are also quirks because OutputPath is used to determine OutDir, but not in all cases. If you're going to specify OutputPath in the properties when invoking the MSBuild task, you should also explicitly override OutDir to ensure consistency, unless you **TRULY** understand the differences between the two and how MSBuild determines OutDir, and you **REALLY** want it to be that way.

Thursday, June 26, 2014

Getting code signing to work with ClickOnce on a TFS Build Agent

Code signing is a giant pain in the butt. You have to:
  • Obtain the certificate for signing the code by:
    1. buying the certificate from an issuer.
    2. generating your own self-signed certificate
  • Configure ClickOnce within your project file with the following property elements:
    • <SignManifests>true</SignManifests>
    • <ManifestCertificateThumbprint>A387B95104A9AC19230A123773C7347401CBDC69</ManifestCertificateThumbprint>
  • Log into your machine **as the user running the build controller / agents** and import the key to their user Personal certificate store!
    • Run 'certmgr.msc' from the Run command in the start menu (WinKey + R is the hotkey)
    • In the Certificate Manager that comes up, go to Personal in the tree, right-click, and select All Tasks -> Import ...
    • In the Certificate Import Wizard window that comes up, select Next to move to the 'File To Import' screen.
    • Select your certificate file, which has the same thumbprint as specified in your project file, then click Next to move to the 'Certificate Store' screen.
    • In the 'Certificate Store' screen, select the 'Place all certificates in the following store' option, then click Browse to select the store. Choose 'Personal' in the selection window. Click Next to move to the "Completing the Certificate Import Wizard" window.
    • On the "Completing the Certificate Import Wizard" window that comes up, click Finish to import the certificate.
You should now be able to build and sign your code on a TFS Build controller / agent.

Sunday, June 22, 2014

Converting an existing Windows Store app to using the Prism Framework

I began converting an existing Windows Store App to using the Prism Framework provided by Microsoft. However, I'm running into the following error:

The primary reference "Microsoft.Practices.Prism.StoreApps" could not be resolved because it was built against the ".NETCore,Version=v4.5.1" framework. This is a higher version than the currently targeted framework ".NETCore,Version=v4.5".

This post on stackoverflow.com recommends installing the Microsoft Build Tools 2013 package, which is available here:
http://www.microsoft.com/en-ca/download/details.aspx?id=40760

That didn't work.

I later realized that I had installed Prism with NuGet, so I went and checked the publishing dates on the versions. The latest (and default, which I had installed) was 1.1.0. The date on 1.0.1 was much older, and after reverting to that version, I was able to get my program to compile and run with a few modifications to the steps in this tutorial. The modifications are as follows:

  • Change the return type of the App.OnLaunchApplication method to 'void' to match the 1.0.1 version of the Prism.StoreApps library.
  • In the App.OnLaunchApplication method, ensure that there's a call to :
    • NavigationService.Navigate("Main", null); where "Main" is the initial page name, and there's a MainPage class in your Views folder.
  • Move the existing MainPage class into the Views folder in the root of the project.
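
Put together, the relevant part of App.xaml.cs ends up looking roughly like this (a sketch against the 1.0.1 Prism.StoreApps API):

using Windows.ApplicationModel.Activation;
using Microsoft.Practices.Prism.StoreApps;

public sealed partial class App : MvvmAppBase
{
    public App()
    {
        this.InitializeComponent();
    }

    // In Prism.StoreApps 1.0.1 this returns void; the 1.1.0 release changed it to return a Task.
    protected override void OnLaunchApplication(LaunchActivatedEventArgs args)
    {
        // "Main" resolves to MainPage in the Views folder by Prism's naming convention.
        NavigationService.Navigate("Main", null);
    }
}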

Creating my first Windows 8 store app

As you may or may not be aware, there are multiple types of applications that can be created for Windows 8:

  • Windows store apps, which use the new Metro interface
  • Desktop-based apps which are like those created for previous versions of Windows that can still run in the Desktop app.
I'm quite familiar with creating WPF apps for Windows, but Metro apps are new, and those are what I'll be working on. With that in mind, Microsoft provides the Prism framework, which supplies additional classes, interfaces, events, etc., to help people develop Windows Store apps that stay consistent with Windows 8 design principles and perform properly. I'll be starting with the MSDN link here.

Beginning to work with Windows Store apps

I really hate Windows 8. I think the majority of the applications that have been written for it are complete pieces of shit, for the following reasons:

  • The developers who wrote them didn't pay any attention to Microsoft's best practices and they :
    • perform poorly
    • don't follow UI conventions and are hard to understand as a result
    • crash 
    • don't always save data properly
  • Many are piss-poorly written and adapted by third party developers for first-party systems because those first-parties don't want to write software in a competing ecosystem, and instead want to force users to use their ecosystem, which has their own set of flaws and deficiencies. Case in point: Google. At the time of this writing, there are no native Windows 8 applications put out by Google. There's no native YouTube app for Windows 8 (which there damn well should be), presumably because those fuckers couldn't find a good way to generate advertising revenue in a Windows 8 app. (can't really blame them for that because if I see ads in an app, I immediately delete it from my device without hesitation. I can't stand that shit.)
  • Windows 8 is a shit operating system. It was built on the new Modern interface (aka Metro), and initially had piss-poor integration with the desktop paradigm on which all previous incarnations of Windows were based. Add to this the fact that Microsoft didn't give people an easy choice of which paradigm they wanted to use right off the bat, and the fact that in successive iterations like Windows 8.1 they've tacked on hacky additions to make the Metro interface more like the previous desktop interface, you end up with a shitty operating system that's a pain to use; this pain stems from the fact that it's a horrible amalgamation of multiple user interface paradigms. 
As long as Microsoft continues to force their shitty iterations of Windows on the world, I, as a software developer, will be forced to deal with it because of the immense investment most employers have in Microsoft technology. With that in mind, I'm going to start learning Windows 8 applications so that I can make myself more marketable to employers everywhere. I'm going to document my learning here for my usual reasons:
  1. So that I have a reference for myself for the future
  2. So that others may learn more easily what I have learned.

Tuesday, June 17, 2014

Resolving ssh: connect to host xxx.xxx.xxx.xxx port 22: Connection refused

There are a number of reasons why an SSH server may fail to allow a client to connect. Many aren't readily apparent, even from tailing system log files or using ssh -v on the client. Here are some of the ones I've encountered:

1. Incorrect permissions / ownership on the key files in /etc/ssh/
2. Incorrect permissions / ownership on the ~/.ssh/id_rsa private key file of the user as which we're trying to connect.
3. systemd just plain being a piece of shit. Running 'systemctl restart sshd.socket' has fixed the problem in the past.

I'll add more to this list as I encounter them.

Wednesday, May 28, 2014

Apache fails to handle requests with "libgcc_s.so.1 must be installed for pthread_cancel to work"

I recently had an apache2 server go down while I was using it. I still don't know what caused it, but I do know how I fixed it, thanks to this thread on Launchpad. Adding libgcc_s.so.1 to the ld pre-load got me back up and running.

echo "/lib/i386-linux-gnu/libgcc_s.so.1" >> /etc/ld.so.preload
ldconfig
The value echoed is the path to the libgcc_s.so.1 file on your system. It can be found with:
gcc --print-file-name=libgcc_s.so.1
I hope this helps anybody who has a similar problem.

Thursday, April 03, 2014

Waiting for the network to be up on the BeagleBone Black

It finally hit me today as I was reading over this article, and it should have hit me sooner. The article mentions that in order to get past the network startup hurdles in systemd, you need to wait on the NetworkManager service. However, I use connman. It didn't occur to me until today that they provide exactly the same functionality, and I just needed to swap one for the other. Now, all of my network-dependent programs simply have After=connman.service in their systemd service descriptor files, and they're golden.

ImportError: No module named pkg_resources when trying to use pip on the BeagleBone Black

I've recently been trying to use Python on the BeagleBone Black, for a number of reasons:

  1. I want to learn a new language which could be useful to me in another job in the future (and this job, even better!)
  2. Given all the effort I've put into making our embedded systems work on multiple platforms, I think I've finally got enough infrastructure in place that we can start leveraging other cross platform products to shorten up our development time.
  3. As a scripting language capable of using bindings to other languages, Python should help me create very functional code that doesn't need extremely high performance in a very short amount of time and reduce my development time for complicated tasks.
Unfortunately, like many other things on the BeagleBone Black, things aren't going as well or as simply as you'd think they should at first glance. For starters, the pip package manager for Python isn't installed by default on the BeagleBone Black (at least not as of the 2012.12 image). So, I had to install that first:

opkg install python-pip

When I tried to run the package manager, I ran into the following error:

Traceback (most recent call last):
  File "/usr/bin/pip", line 5, in
    from pkg_resources import load_entry_point
ImportError: No module named pkg_resources

After Googling around for a bit, I found these questions on Stack Overflow. Apparently you must also have the setuptools package installed in order to be able to use pip because it's not a simple package manager like apt in Ubuntu. It's more like emerge in Gentoo, where it downloads code packages and is capable of compiling them and performing custom installations. Fortunately, there's an opkg package for that:

opkg install python-setuptools

After that was installed, it became a simple matter of finally importing the actual package that I originally wanted that started all of this:

pip install psutil

... the process utils library for Python

Friday, March 21, 2014

Undefined reference to `log' when compiling on Ubuntu 13.10 with gcc

After I upgraded to Ubuntu 13.10, I inexplicably started getting errors in a build that had been perfect for a very long time. It turned out there was a significant change in the linker, and a bug had been introduced. The fix is described beautifully in this blog post, but for convenience's sake:

Add '-Wl,--no-as-needed' to your LDFLAGS

Monday, March 17, 2014

Slow login times in Ubuntu 13.10 (not just SSH)

I recently set up a new install of Ubuntu 13.10 for a server and found that a lot of my login times were slow when remotely logging in (and not just via SSH). The culprit turned out to be the /etc/nsswitch.conf file:

This line:
hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4

Should be changed to this line:
hosts: files dns

... to resolve the issue

Thursday, March 13, 2014

Developing a cape for the BeagleBone (Black)

Due to the needs of our company, we're developing an in-house cape for the BeagleBone Black to integrate it with our equipment. Yes, there are numerous capes already in existence, but they each provide a singular function that would require stacking in combination with other capes to meet our needs. Additionally, it increases the number of external vendors on which we must rely to meet our demands. Instead, we've chosen to create a custom cape that has multiple extensions on the same cape and integrates nicely with our existing board stacks. Unfortunately, I'm not that familiar with BeagleBone Black capes, so I'm going to be learning how to create the software necessary for a new cape and configure it using the Device Tree system on the BeagleBone Blacks. I'll be tracking my progress and things that I've learned on this blog for my own future reference and hopefully it'll even help out somebody else.

Friday, January 31, 2014

Recovering the root password on a BeagleBone Black

I recently went to use a BeagleBone Black board on which I'd never booted from the eMMC before (that I could recall) and had been instead booting off of a microSD card. To the best of my knowledge, the root password on this board should never, ever have been changed from the default (blank password), but apparently it was. None of the passwords that I had ever used for any of my BeagleBone Blacks was working, which created a problem: I needed to recover the root password to this BeagleBone Black so that I could use it for my projects again. I was in luck. The BeagleBone Black has the tools that I needed to do it. I'm writing this blog post (like many other posts in the blog) so that I can have a reference to come back to if I ever need it in the future.

To recover the root password of a BeagleBone Black, you'll need the following items:

  • A 5V, minimum 1 A dedicated power source (you shouldn't really be powering the board off of the micro USB port)
  • An SD card (and reader, of course)
  • An FTDI USB-to-serial cable for accessing the debug serial port of the BeagleBone Black.
Once you've acquired all of the required items above, perform the following steps:

  1. Flash the SD card with an image that you can obtain from BeagleBoard.org/GettingStarted.
  2. Insert the SD card into the *unpowered* BeagleBone Black.
  3. Apply power.
  4. The BeagleBone Black should boot from the SD card (assuming that you've flashed the correct image)
  5. Connect the FTDI serial cable to the board and your computer, and open your serial client of choice to connect to the board.
  6. Hit enter once or twice, and a command prompt should come up, assuming you've used the correct settings:
    1. 115200
    2. 8N1
  7. Log in using root and a blank password (should be the default on the SD card image that you downloaded from the BeagleBoard.org website.)
You're now logged into the image running on the SD card. However, we can't change the root password now because that will only change it for the root user on the SD card image. We need to change it for the permanent image stored on the BeagleBone Black's embedded eMMC flash. Being logged in as you are, perform the following steps to mount the eMMC image and change the password for that image's root user:

  1. Mount the eMMC flash:  mount /dev/mmcblk1p2 /media/card
  2. Change the root of the file system to be the partition on the eMMC flash which you just mounted: chroot /media/card
  3. Change the password of the root user to something you know: passwd root
  4. Exit out of the changed root: exit
  5. Shut down the BeagleBone Black: shutdown -h now
  6. Disconnect the power from the board
  7. Eject the microSD card.
  8. Reconnect the power to the board
  9. Watch the board boot up, and log in as root. You should be able to log in with the password that you just set.

Implicit rules with Makefile

In my ongoing quest to make my builds less complex and faster, I've been going through my Makefiles and trying to learn as much as possible to simplify them and leverage Make as much as I can. To that end, I discovered something incredibly useful today that I suppose I would have known had I taken the time to read the man page for Make:

make -p


This command prints Make's internal database, including all of its implicit rules, which you can then use to optimize the living crap out of your Makefile.

Thursday, January 02, 2014

Installing NTP on an Ubuntu server

NTP is used to synchronize time between machines. You can read the Ubuntu HOWTO here.

Installing a TFTP server using xinetd on Ubuntu

I recently had need to redo an old server that had been running an ancient Gentoo installation for which there was no longer a software upgrade path, so I chose to install Ubuntu on it. One of the requirements for the server was that it run a TFTP server for hosting files used to configure embedded devices. I had previously found an article on how to set up TFTP through inetd, but I couldn't find it again, so I'm cobbling this tutorial together from various sources.

  1. # apt-get install xinetd tftpd
  2. Ensure that the following lines are in the /etc/services file:
    1. tftp     69/tcp
    2. tftp     69/udp
  3. Open the /etc/xinetd.d/tftp file (create it if it doesn't exist) and ensure that it contains the following:
    # default: off
    # description: The tftp server serves files using the Trivial File Transfer \
    #    Protocol.  The tftp protocol is often used to boot diskless \
    #    workstations, download configuration files to network-aware printers, \
    #    and to start the installation process for some operating systems.
    service tftp
    {
        socket_type     = dgram
        protocol        = udp
        wait            = yes
        user            = root
        server          = /usr/sbin/in.tftpd
        server_args     = -s /tftpboot
        disable         = no
    }
    
  4. # /etc/init.d/xinetd restart
  5. Test the server: # tftp localhost
    tftp> get hello.txt
    Received 23 bytes in 0.1 seconds
    tftp> quit 
You should now be good to go. The (abridged) instructions for this were retrieved from this article.