Saturday, February 3, 2018

Did I really brick my graphics card with that BIOS flash?

So over the holidays I got excited about cryptocurrency mining and decided to build myself an alt-coin mining rig.  This was going to be a first for me, so naturally I was going to learn a lot and make a ton of mistakes.  I might write more fully about the whole process later, but I thought the most interesting thing I learned was how to recover a couple of cards that I thought I had bricked (during an attempted BIOS flash), and the mistakes I made to get them into that bad state.

I decided on using AMD Radeon RX 570s, mainly because (1) they were available and (2) they had a reasonable power-consumption-to-hash-rate ratio based on what I was reading.

So for my first cards I got two MSI gaming cards: one with 4 GB of RAM and the other with 8 GB.  My Windows 10 box had two slots and a large enough power supply to run both, so since I hadn't built a rig yet I just dropped them in and started mining.

I was pretty disappointed with the initial results.  I was getting ~19 MH/s out of the 8 GB card and ~21 MH/s out of the 4 GB card.  Not to fear though, I had read you could switch the Radeon graphics driver into "Compute" mode.  This was super easy and got the 8 GB card doing ~21 MH/s while the 4 GB card moved up to ~24 MH/s.  I let the cards run like this for a few days as I researched ways to improve my hash rate further.

Not long after that I read about the holy grail for improving performance: modding and flashing your BIOS.  I read all kinds of things suggesting that a modded BIOS on a 570 might get me all the way to 30 MH/s.  That was a pretty big possible improvement, so I had to give it a try.  I saved my original BIOSes and then tried some simple mods.  The Polaris BIOS editor offered some one-click timing mods that adjust the memory straps, so I decided to start with those.  After modding and saving the new BIOSes, I flashed my cards with winatiflash.  As a final step I also patched my Radeon driver so that Windows would recognize my cards after the BIOS flash.  I then rebooted.


I should say I tried to reboot.  Instead I experienced a heart-stopping nightmare as Windows would not BOOT!  After I freaked out a bit and rebooted several times with the same result, I tried to boot with one card at a time.  I discovered that I could actually boot with the 4 GB card installed, but the video had all kinds of artifacts, and the card wasn't recognized when I tried to run my mining software.

Well, I figured at least I could re-flash this card and get it back to its old state.  So I opened winatiflash, loaded the old BIOS, and flashed it.

Reboot and... nothing.  Same artifacts, same failure to be recognized.

What followed was a series of reboots, driver re-installs, and eventually reading about the 1+8 pin trick that supposedly resets the BIOS on the Radeon RX 480.  I was hoping this would work on the 570, so I gave it a shot.  More precisely, I gave it several shots... on both cards.  No dice.  I couldn't get anything good to happen.  So in a last-ditch effort to save two increasingly expensive cards from the trash heap, I found an eBay store that sold actual BIOS chips.  I am not practiced at soldering at all, but I had been at this in my spare time for more than a week at this point.  I figured I couldn't make the cards less useful to me than they already were.

When I inquired as to whether this 'BIOS store' had chips for my cards, they responded by asking what memory type the cards had (Elpida, Samsung, etc.).  I wasn't sure I remembered, so I opened up the original BIOS for the 4 GB card in a hex editor to try to figure out what type of memory was used.  As I was muddling through this, I saw something that stopped me:


Wait a minute, this is the original 4 GB ROM... isn't it?  Checking the filename, I confirmed that it was indeed the ROM I thought belonged to the 4 GB card, but after a moment I realized what must have happened.

Apparently I had misidentified which card was the 4 GB and which was the 8 GB when I was saving and flashing the BIOSes, so I had ended up swapping the BIOSes between the two cards.  Even when I restored the 4 GB card's "original" BIOS, I had actually been putting the 8 GB card's BIOS on it.

Well, as G.I. Joe said, "Knowing is half the battle."  I flashed the 4 GB card with the BIOS I now knew to be correct and rebooted.  Problem solved for that card, but I was left with one bricked card to deal with.  Since the machine wouldn't boot with this one installed, I wasn't sure how to proceed.

After sitting on that for a while, I decided to drop the 8 GB card into a machine running Ubuntu Linux 16.04.  This was the operating system I was going to use for the mining rig, and I thought maybe, JUST MAYBE, Linux would boot with the card installed.  After all, I wasn't even booting to a GUI, just to a terminal.

So on to the moment of truth.  Drop the card in, turn on the power, wait several seconds AND SUCCESS!!!!  

After a successful boot, I flashed the card with the Linux version of atiflash, and now both cards were mining just as well as they had before my misadventure.  Note: you have to run atiflash as root.  If you aren't running as root, atiflash simply says there are no compatible devices (or something to that effect).  The application should detect that it needs root and tell you, but instead it behaves exactly like you have a bricked card, which isn't very heartening if you already believe your card is bricked.

So my new rules for bios flashing are: 

  1. Never have more than one card in the machine during a BIOS flash.
  2. Always use Linux for flashing and testing flashes.

Note: I found that you don't need to do any driver patching to run a modded BIOS on Linux, which I really appreciate.
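For reference, the Linux workflow that follows these rules is short.  A sketch, assuming the commonly documented atiflash flags (the adapter number and file names are placeholders):

```
sudo ./atiflash -i                 # list detected adapters; with one card installed this is adapter 0
sudo ./atiflash -s 0 original.rom  # save (back up) the current BIOS before touching anything
sudo ./atiflash -p 0 modded.rom    # program the new BIOS, then reboot
```

And remember the root requirement noted above: without sudo, atiflash will claim there are no compatible devices.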

Friday, December 30, 2016

Announcing MTG DeckView on the Apple AppStore

I have been working for quite a while on a little iOS application for viewing and modifying Magic the Gathering deck lists.  It has been 90% finished for some time, but that last 10% is always the most difficult to push through.  The holiday break gave me the opportunity to really focus on it and finish it to the point where I feel it is ready to release to the world.

iOS Support for MTGO .dek files

The main reason I started on this application was that I was annoyed that there didn't appear to be a good way to view the native file format of the "Magic the Gathering Online" client (the .dek file) on iOS.  So what I have provided in "MTG DeckView" is a deck list viewer that can open and re-export the MTGO .dek format on iOS:

I have also provided support for importing and exporting the simple text decklist format supported by MTGO.

Storing deck lists for offline use

The application stores your decks in a clean and simple tableview interface:

The deck view itself is also fairly simple and uncluttered:

Deck Building

There is some support for deck building as well: you can add new cards from the deck view above (just tap the button in the top right corner), and you can change the number of copies of a card in your deck from the card detail view:

The deck building support is a bit primitive in my opinion, though.  You can only search by card name, and the search has to match from the beginning of the name.  This is something that could be improved in the future, but I feel the initial design of the application (i.e., a deck viewer) is ready for release.  If there is enough interest I could definitely see myself adding much more deck building functionality.


I take pride in making sure my apps function properly.  I feel I have crafted a very functional piece of software, but that doesn't mean there can't be issues.  If you have discovered a bug, please contact me with details on how to reproduce it, and I will make sure to correct it.  Also feel free to contact me with feature requests and things you wish it did that it does not.  I will consider all feature requests, but I can't guarantee that everything someone wishes for will make it into a future release.

Contact me at:

Friday, July 24, 2015

An HTTP Proxy using Sinatra

I recently found myself wanting to create an HTTP proxy to intercept certain REST requests and translate them into completely different REST requests, while at the same time redirecting them to a new location.

You might not want to do EXACTLY what I wanted to do, but there are many similar tasks that either require a proxy or are greatly simplified by one. In my case I essentially wanted to extend an existing application to a new purpose without actually changing any of its code.  This is the sort of thing proxies are great at: you have some existing application, and either you don't want to change its code (and risk breaking existing functionality or incur a large testing overhead), or you don't even have the code.  As long as you understand the protocol, you can use a proxy to intercept and modify the data, thereby adapting an existing application to new purposes without touching the old application or code (with the small required exception of pointing it at your proxy).

If your code/application is communicating via HTTP, here are some ideas of things you might want to do with a proxy:

1 - Insert or remove a specific HTTP Header
2 - Redirect some (but not all) traffic to a different service
3 - Deny access to specific sites/resources
4 - Replace specific resources with custom resources
5 - Log or inspect traffic as it flows across the proxy (debugging)

If you want to do anything like this, you may find this proxy code useful:

A simple HTTP Proxy using Ruby and Sinatra

I have stripped out all of the code specific to my task, and what is left is a simple proxy that intercepts HTTP requests and forwards them on to their new location.  It is ready for you to drop in a few lines of code to accomplish your specific task.

For example:

If you wanted to add a specific header to your requests before passing them along, you could do it in one line right after the headers are retrieved from the request and before they are forwarded:

headers = getHeadersFromRequest
headers['X-Custom-Header'] = 'some value'   # example header name/value

I am sure it isn't perfect, so if you do use it and come up with suggestions for improvement let me know in the comments.
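Beyond adding a single header, one step worth having in any forwarding proxy is stripping hop-by-hop headers, which are only meaningful for a single connection and must not be passed upstream.  A minimal, self-contained sketch of that step (the helper name and the X-Forwarded-By marker are my own, not part of the gist):

```ruby
# Hop-by-hop headers (RFC 7230, section 6.1) apply to one connection only;
# a proxy should drop them before forwarding the request upstream.
HOP_BY_HOP = %w[connection keep-alive proxy-authenticate proxy-authorization
                te trailers transfer-encoding upgrade].freeze

def proxy_headers(incoming)
  # Drop hop-by-hop headers (case-insensitively), keep everything else.
  outgoing = incoming.reject { |name, _| HOP_BY_HOP.include?(name.downcase) }
  # Tag the request so the upstream service can tell it came via the proxy.
  outgoing['X-Forwarded-By'] = 'sinatra-proxy'
  outgoing
end

puts proxy_headers('Connection' => 'keep-alive', 'Accept' => 'application/json')
```

A helper like this slots naturally into each of the proxy's routes, right where the one-line header tweak above goes.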

Wednesday, July 1, 2015

Building with GCC 4.6 and Xcode 4

NOTE: This is the resurrection of a blog post from 2012, on a blog that is now defunct.  I am moving it over here because I think parts of it can still be somewhat useful, although I am sure it is dated, and I am not sure the method used here for replacing the compiler will work with modern versions of Xcode.


I was recently faced with a problem building some C++11 code on OSX. I discovered that for all of the improvements Apple has put into LLVM, it had at least one glaring failure: it did not support lambda expressions. So I was forced to try to get GCC 4.6 building my project on OSX. I ran into two major issues:
  1. GCC does not support fat binaries (Mac universal binaries).
  2. Xcode is configured to use clang, but I needed to configure it to run a different compiler.

I was able to resolve both of these problems and packaged a small project that you can use to easily overcome them as well!  If you want the Reader's Digest version of how to get this working relatively quickly, skip to the end of the article for the steps.  For the detailed explanation, read on…

Building compilers isn’t my favorite pastime, so I looked around and found that the MacPorts project makes downloading and installing GCC 4.6 (or a number of other versions) pretty painless.
After downloading and running the MacPorts installer, I ran the command port install gcc46 +universal.  After a while the install finished, and I thought that maybe this wouldn’t be as bad as I had first imagined.  So I began my build (an autoconf-based command-line project).
I discovered very quickly one of my first problems:

gcc-mp-4.6: error: unrecognized option '-arch'
gcc-mp-4.6: error: unrecognized option '-arch'

Ok, I guess I should have realized that this would happen.  I was trying to build “fat” (or universal) binaries using the -arch i386 and -arch x86_64 flags.  These allow you to include support for more than one platform architecture in a single binary.  They are Apple extensions, so of course they wouldn’t be in the FSF version of gcc.  Being understandable doesn’t mean it wasn’t a pain to work around: I needed fat libraries/binaries to be compiled.  I started thinking about how to get around this problem.  I knew that you could use lipo to combine multiple object files of different architectures, so I thought maybe I could write a compiler wrapper that would honor the -arch flag, compile once with -m32 and once with -m64, and then call lipo to smash the results back together into a fat object file.
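Concretely, the wrapper idea amounts to something like this per source file (file names here are just placeholders; gcc-mp-4.6 is the MacPorts binary):

```
gcc-mp-4.6 -m32 -c foo.c -o foo_i386.o                # compile the 32-bit object
gcc-mp-4.6 -m64 -c foo.c -o foo_x86_64.o              # compile the 64-bit object
lipo -create foo_i386.o foo_x86_64.o -output foo.o    # merge into one fat object
```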

First, I searched to see if anyone else had gone to this effort before I spent all the time on it.  After a bit of web searching I discovered that Apple’s version of gcc 4.2 turns out to do exactly what I described in the previous paragraph.  Not only that, but the source was also freely available.  So if I could compile this and get it to wrap the MacPorts version of gcc, I would be golden.
I spent a bit of time hacking on it and discovered that the driver basically calls a different compiler for each architecture in its architecture map.  So I simply made a two-line script each for i386 and x86_64:

#!/bin/sh
/opt/local/bin/gcc-mp-4.6 -m32 "$@"

#!/bin/sh
/opt/local/bin/gcc-mp-4.6 -m64 "$@"

I put these in the MacPorts /opt/local/bin and named them so that when I compiled driverdriver.c it would call the first for the 32-bit arch and the second for the 64-bit arch.  My new “compiler” is called gcc-mp (the mp stands for MacPorts), and I dropped it in /opt/local/bin as well.
I repeated these steps for g++.  I hoped that the MacPorts versions of gcc and g++ would be wrapped sufficiently to honor the -arch flags just like the Apple version of gcc 4.2.  I modified my configure invocation to use gcc-mp and g++-mp, and sure enough it worked like a charm.  I was now compiling with the -arch flags honored, correctly creating fat binaries with both 32-bit and 64-bit Intel architectures.

It wouldn't properly compile PPC, ARM, or any other architecture, but that was mainly because I didn’t need those architectures.  I assume it would be possible to get the MacPorts gcc 4.6 ARM compiler and repeat the same steps for ARM.  That is left as an exercise for the reader if you need ARM support. :)

So at this point my build was humming along until it hit a portion that calls xcodebuild to build an Xcode project file.  Of course, I could not choose my new compiler in the Xcode project file; it was still trying to use clang.  Well, surely someone had already fixed this problem.  After a few moments of Googling, I found a good blog post that got me started.

Basically, you can add compiler definitions to Xcode by modifying/creating some XML definition files.  After a bit of fiddling I discovered that the article must have been based upon modifications to a version of Xcode 3, while I was using Xcode 4.2.1, where these compiler plug-ins now live in a different location under /Developer/Library.  If you happen to be using Xcode 4.3+, the location is similar, except the root is in /Applications instead of /Developer/Library.
Additionally, just creating a gcc 4.6 compiler plug-in didn’t work.  That wasn’t too surprising, since there was already a gcc 4.2 compiler definition there that you can't choose in Xcode.  I recalled hearing that Apple wasn’t going to support gcc anymore; it was llvm-gcc or clang in Lion (10.7).  Since the LLVM-GCC-4.2 plug-in did show up in Xcode, I decided to make two plug-ins: one for gcc 4.6, and one based on the 4.6 plug-in that pretends to be llvm-gcc.  This actually worked:

Well, that was exciting!  I even modified the compiler definition so it automatically compiles with the -std=c++0x flag. (Why else would you ever go through this pain?)  I had failed to include it originally and struggled for a few minutes to figure out why my C++11 code still wasn’t compiling.

At this point I tried to compile my project and ran into an error where it couldn’t find a .hmap file.  I don’t really know what is going on there, but I discovered that you can turn off the use of header maps by adding a custom build setting to your project: USE_HEADERMAP=NO.  Sounds like a plan to me.  If anyone has a better suggestion for this, please leave a comment.
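For builds driven by xcodebuild from the command line, the same setting can also be passed as an argument (the project and configuration names here are placeholders):

```
xcodebuild -project MyProject.xcodeproj -configuration Release USE_HEADERMAP=NO
```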

After adding this, I was good for about 30 seconds, until I got to the portion of my project where some Cocoa Objective-C UI code was being compiled.

/Developer/SDKs/MacOSX10.7.sdk/System/Library/Frameworks/Foundation.framework/Headers/NSTask.h:75:24: error: expected unqualified-id before '^' token

/Developer/SDKs/MacOSX10.7.sdk/System/Library/Frameworks/Foundation.framework/Headers/NSTask.h:75:24: error: expected ')' before '^' token 

Uh oh.  I knew what this was: the Apple "blocks" language extension.  It appears that blocks are used in a bunch of the system header files, and I don't think there is a way to get around this using MacPorts gcc; the FSF gcc just doesn’t know about blocks.  Fortunately for me, I didn't have anything in the Objective-C/C++ code that needed to be compiled with gcc 4.6, so I just had this target compile using clang.

This works alright when I link only to C libs compiled by GCC 4.6, but when I try to link to a C++ lib built by GCC 4.6 I get a bunch of linker problems (most likely a libstdc++/C++ ABI mismatch between the two compilers).  I was able to restructure the code to remove the dependency on the GCC 4.6 C++ library so that I was only linking with C libraries.  I should probably look into this some more, but if anyone else out there has had this problem and knows how to resolve the C++ linkage issues, please leave a comment.

After all of this, the project finally finished compiling and it works. 

Victory!  Until I have to remember and repeat all of this on a new machine three months from now.  So I decided to make a little CMake project to create the compiler wrappers and extend Xcode.  Now you too can use MacPorts gcc from within Xcode to create fat binaries.

Follow these 6 steps to easily replicate what I have done:

1 - Download and install CMake; this project requires it.
2 - Install MacPorts for your OSX version.
3 - Install the UNIVERSAL version of gcc46 using MacPorts (this might take a while):
       /opt/local/bin/port install gcc46 +universal
4 - Download and unzip the macportsgccfixup tarball I created: MacPorts GCC Fix-up
5 - Run ./configure (which wraps the cmake configuration command)
6 - Run “make” and then “make install”.

NOTE: If you don’t install the universal version of MacPorts GCC, you will eventually get linking errors when it comes to finding the C++ libs for the non-native architecture:

ld: warning: ignoring file /opt/local/lib/gcc46/libstdc++.dylib, file was built for unsupported file format which is not the architecture being linked (i386)
ld: warning: ignoring file /opt/local/lib/gcc46/libgcc_ext.10.5.dylib, missing required architecture i386 in file
ld: warning: ignoring file /opt/local/lib/gcc46/gcc/x86_64-apple-darwin10/4.6.2/libgcc.a, file was built for archive which is not the architecture being linked (i386)


1 - I did all of this using Xcode 4.2.1 on 10.7.  I tested it on 10.6 as well (using Xcode 4.2 there), but it probably won’t work on any earlier versions of OSX.
2 - As noted earlier, it only supports 32-bit and 64-bit Intel.  More work would be required to get anything else working, so no ARM and no PPC.
3 - It almost certainly won’t work with Xcode 4.3 yet, but that is alright since I don’t think MacPorts works with Xcode 4.3 yet either.
4 - It MIGHT work with Xcode 3.6.x.  That would need to be tested.
5 - Any C++ binaries built using GCC 4.6 will have a dependency upon the C++ libs in /opt/local/lib.  If you want to distribute something built this way, you will either need to install the MacPorts GCC on the target machine first, or ship all of the C++ libraries with your distribution.

As a final note: it should be easy to change the version of GCC all of this works with by simply changing the GCC version in the CMakeLists.txt.  I haven’t tested it yet, but if you want 4.7 you might give it a try.

Monday, June 29, 2015

Mock almost any web service with these 100 lines of ruby code

Many applications now depend upon web services to provide much of their functionality.  Testing applications and their interactions with these web services can be pretty hard to automate without a mock service.

I recently needed to write some unit tests for a bit of code that communicated with a web service, and so I started reading up on Sinatra.

It turns out that Sinatra makes it very simple to generate just about any web service.  It is especially trivial to mock data for a given web service end point.  Take this example right out of the Sinatra README:

get '/hello/:name' do
  # matches "GET /hello/foo" and "GET /hello/bar"
  # params['name'] is 'foo' or 'bar'
  "Hello #{params['name']}!"
end

This seemed awesome, but I realized that I had maybe 50 endpoints that I wanted to mock.  I guess that still isn't too bad, but I really wanted to generalize my Ruby code so that it could mock any endpoint without code modification.

So I wrote a Sinatra service that has only 4 endpoints, using splat captures ('*'), which are basically wildcards.  I made one route for each of the main HTTP verbs (POST, PUT, GET, DELETE), and had each route match ANY endpoint for its verb:

get '*' do     ...
put '*' do     ...
post '*' do    ...
delete '*' do  ...

I then wrote each route to mirror a "fixtures" directory on the filesystem.

For example, say I had an endpoint to some API for getting user information:

GET /companies/company1/users/jdoe

The get '*' route handles this, finds the file on the filesystem at ./fixtures/companies/company1/users/jdoe.json, and returns the contents of this file in the body of the response:

get '*' do
    path = Pathname.new("#{File.dirname(__FILE__)}/#{FIXTURESDIR}#{params[:splat].first()}")
    pathPlusJson = Pathname.new("#{path}.json")
    if path.exist? and path.directory?
        response = get_directory_contents_array(path.to_path)
        return response.to_json
    elsif pathPlusJson.exist? and pathPlusJson.file?
        response = JSON.parse(pathPlusJson.read)
        return response.to_json
    else
        return create_response( {"error" => "Not Found" }.to_json, 404 )
    end
end

If I wanted to iterate over all users, I could instead request a GET on a directory instead of a file:

 GET /companies/company1/users

Now the get '*' route browses to the directory and iterates through all the ".json" files, returning an array of JSON-encoded user objects in the body.

If I want to add a new user I can simply do a POST:

POST /companies/company1/users/dsmith

with some JSON in the body, and this JSON will be written to ./fixtures/companies/company1/users/dsmith.json

Delete also works as you might expect, deleting the file in the fixtures directory.
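The heart of this mirroring is a simple path-mapping rule.  As a standalone sketch (the constant and helper name here are mine, not necessarily what the Gist uses):

```ruby
# Map a REST endpoint onto the fixtures directory: the URL path becomes a
# relative file path, and a single resource gets a ".json" extension.
FIXTURES_DIR = 'fixtures'

def fixture_path_for(endpoint)
  File.join(FIXTURES_DIR, endpoint.sub(%r{\A/}, '')) + '.json'
end

puts fixture_path_for('/companies/company1/users/jdoe')
# => fixtures/companies/company1/users/jdoe.json
```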

All of this can be done with very little code, thanks to the domain-specific nature of Sinatra.  The full Ruby code can be found in one of my GitHub Gists:

Now, if you are unit testing a ruby application you can stub any requests to your api endpoint using webmock:

# spec/spec_helper.rb
RSpec.configure do |config|
  config.before(:each) do
    stub_request(:any, /localhost:4567/).to_rack(Sinatra::Application)
  end
end

If you aren't unit testing a Ruby application, you can still fire up this service during the unit-test phase of your build and redirect your API requests to localhost:4567.

Wednesday, June 17, 2015

Adding a right-click context menu to Finder from an OSX application

Perhaps you have an application that you think would really, really benefit from an additional Finder menu item.  You would like to show this menu under some circumstance: perhaps for a specific kind of file, or when a file is in a specific state.

In the next few paragraphs, I am going to discuss what you can and cannot easily add to the finder menu, and I am providing access to code for a project that does this:

You can also download the pre-built application from the AppStore if you want to see how it works without compiling any code:

Ok, now for the bad news:

In reality, you cannot add anything to Finder directly.  You can get things added to the Finder context menu, but it is more like you are kindly asking OSX to add the items.  You have a little bit of control, but not a lot, and the control you do have amounts to telling OSX to display a service for a specific kind of content.  If you want control that isn't based upon content type (say, the state of a file), it has been basically impossible since the Cocoa rewrite of Finder in 10.6.

If you want to provide a service based upon a specific kind of content (like files, or text), then the good news is that this is very, very easy.

Getting content to show up in finder is all about something called NSServices, and you can find the implementation guide here:

So let's use our example code/application to demonstrate how easily this can be done.  The application we are using as an example base64-encodes files and text, as well as decoding base64-encoded text.  This sort of application is much more useful if you can simply right-click on a file and encode it, or highlight some base64-encoded text (perhaps found while browsing the web) and right-click to decode it.

If this is what we want, all we have to do is modify our application's Info.plist, telling OSX that we want to provide a service or two for text and/or files.  Let's look first at the service registration for decoding base64-encoded text:

If you were looking at just the entry in a text editor, it would look like this:



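A minimal NSServices entry of this shape looks roughly like the following in Info.plist (the four keys match the items discussed below; public.utf8-plain-text is Apple's standard UTI for plain text, and the other values are illustrative):

```xml
<key>NSServices</key>
<array>
  <dict>
    <key>NSMenuItem</key>
    <dict>
      <key>default</key>
      <string>Base64Decode</string>
    </dict>
    <key>NSMessage</key>
    <string>DecodeText</string>
    <key>NSRequiredContext</key>
    <dict/>
    <key>NSSendTypes</key>
    <array>
      <string>public.utf8-plain-text</string>
    </array>
  </dict>
</array>
```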
So we have 4 important items here:

1. NSMenuItem - The "default" key describes the text that will show up in your right-click menu.  In this case, "Base64Decode".

2. NSMessage - The message that gets sent to your application when this service is invoked.  This is where your code begins executing when the menu item above is clicked.  It must be defined on a class that is registered as the "ServiceProvider" for your application.  In the case of Base64Anywhere, I register it in applicationDidFinishLaunching:.

        - (void)applicationDidFinishLaunching:(NSNotification *)aNotification
        {
            ServiceProvider* provider = [[ServiceProvider alloc] init];
            [NSApp setServicesProvider:provider];
        }

Then I defined the DecodeText message receiver on the service provider class as follows:

     - (void) DecodeText: (NSPasteboard*) pasteboard : (NSString*) error;

3. NSRequiredContext - This allows you to restrict, in some ways, where or when your service is shown.  For example, you could restrict a text service to only show up for text that conforms to a file path.  Even if you are not specifying restrictions, you must still include the NSRequiredContext key with an empty dictionary.  If you do not, your service will not show up.

4. NSSendTypes - This defines what content types your service shows up for.  In this example we are using the Apple-defined UTI for "text".  You can specify a service for any number of Apple-defined UTIs.  A list can be found here:

If you want to provide a service for a specific type of file not listed there (maybe because your application created the file type), you can create and register your own UTI.  Once it is defined, you can include it in NSSendTypes so that the menu item only shows up for your custom file type.  Details on how to register a new UTI can be found here:

If your service needs to contextually return something, you may also want to define the NSReturnTypes for your service.  In the base64 example, we might want to highlight some text in Xcode and replace it with its base64-encoded equivalent.  We can do this by specifying the appropriate return type for our service.


The return type is especially useful if we are creating a .service bundle (as opposed to a .app bundle).  Such a service can actually perform an action, returning a value into an existing UI, without needing any UI of its own.

Hopefully this helps you get started adding services to your own application.  Leave comments below!