
Resolving the Veeam "Backup files are unavailable" message when restoring a VM

By Steve Endow

I'm a huge fan of the Veeam Backup and Replication product.  I've used it for several years now to back up my Hyper-V virtual machines to a Synology NAS, and it has been a huge improvement over the low-tech, script-based VM backups I was suffering with previously.

One quirk I have noticed with Veeam is that it seems to be very sensitive to any loss of connectivity with the backup infrastructure.  With a prior version, if Veeam was running but my file server was shut down, I would get error notifications indicating that it couldn't access the file server--even though backups were not scheduled to run.  I haven't noticed those messages lately, so I'm not sure if I just turned them off, or if I haven't been paying attention to them.

Since I don't need my servers running 24x7, I have them scheduled to shut down in the evening and then automatically turn on in the morning.  But sometimes if I wrap up my day early, I may shut down all of my servers and my desktop at, say, 8pm.  If I shut down my Synology NAS first, and Veeam detects that the file server is not accessible, it may log a warning or error.

Normally, this isn't a big deal, but I found one situation where this results in a subsequent error message.  I recently tried to restore a VM, and after I selected the VM to restore and chose a restore point, I received this error message.

Veeam Error:  Backup files are unavailable


When I first saw this message I was concerned there was a problem, but it didn't make sense because Veeam was obviously able to see the backup files and it even let me choose which restore point I wanted.  So I knew that the backup files were available and were accessible.

I could access the network share on my NAS file server and browse the files without issue.  I was able to click on OK to this error message, complete the restore wizard, and successfully restore my VM.  So clearly the backup files were accessible and there wasn't really an issue.

So why was this error occurring?

I submitted a support case to Veeam and spoke with a support engineer who showed me how to resolve this error.  It seems that whenever Veeam is unable to access the file share used in the Backup Infrastructure setting, it sets a flag or an error state in Veeam to indicate that the backup location is not available.  After this happens, you have to manually tell Veeam to re-scan the backup infrastructure in order to clear the error. Fortunately, this is very simple and easy.

In Veeam, click on the Backup Infrastructure button in the bottom left, then click on the Backup Repositories page.  Right click on the Backup Repository that is giving the error, and select Rescan.


The Rescan will take several seconds to run, and when it is done, the "Backup files are unavailable" message will no longer appear when you perform a restore.  Or at least that worked for me.

Overall, I'm incredibly pleased with Veeam Backup and Replication and would highly recommend it if it's within your budget.


You can also find him on Twitter, YouTube, and Google+









Filtering log entries in ASP.NET ILoggerFactory logs

By Steve Endow

I have two new projects that require web service APIs, so rather than use a tried-and-true tool that I am familiar with to develop them, I am plunging into the dark depths of ASP.NET Core.

If you've played with ASP.NET Core, you may have noticed that Microsoft has decided that everything you have learned previously about developing web apps and web services should be discarded, making all of your prior knowledge and experience worthless.  And if you choose to venture into the new world of ASP.NET Core, you will be rewarded by not knowing how to do anything.   At all.  Awesome, can't wait!

One of those things that you'll likely need to re-learn from scratch is logging.  ASP.NET Core has a native logging framework, so rather than write your own or use a third party logging package, you can now use a built-in logger.  This sounds good, right?

Not so fast.  At this point, I have come to understand that nothing is easy or obvious with ASP.NET Core.

This article provides a basic overview showing how to perform logging in ASP.NET Core.

https://docs.microsoft.com/en-us/aspnet/core/fundamentals/logging

One thing it doesn't clearly explain is that if you have your log capture Information level entries, it will quickly be filled with hundreds of entries from the ASP.NET Core engine / web server itself.  You will literally be unable to find your application entries in the log file if you log at the Information level.



So the article helpfully points out that ILoggerFactory supports filtering, allowing you to specify that you only want warnings or errors from the Microsoft tools/products, while logging Information or even Debug messages from your application.

You just add this WithFilter call to the Configure method in your Startup.cs:

    loggerFactory
        .WithFilter(new FilterLoggerSettings
        {
            { "Microsoft", LogLevel.Warning },
            { "System", LogLevel.Warning },
            { "ToDoApi", LogLevel.Debug }
        })


Cool, that looks easy enough.

Except after I add that to my code, I see the red squigglies of doom:


Visual Studio 2017 is indicating that it doesn't recognize FilterLoggerSettings. At all.



Based on my experience with VS 2017 so far, it seems that it has lost the ability (that existed in VS 2015) to identify missing NuGet packages.  If you already have a NuGet package installed, it can detect that you need to add a using statement to your class, but if you don't have the NuGet package installed, it can't help you.  Hopefully this functionality is added back to VS 2017 in a service pack.

After many Google searches, I finally found this StackOverflow thread, and hidden in one of the post comments, someone helpfully notes that the WithFilter extension requires a separate NuGet package, Microsoft.Extensions.Logging.Filter.  If you didn't know that, you'd spend 15 minutes of frustration, like I did, wondering why the very simple Microsoft code sample doesn't work.

Once you add the Microsoft.Extensions.Logging.Filter NuGet package to your project, Visual Studio will recognize both WithFilter and FilterLoggerSettings.
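
For context, here is a rough sketch of what the Configure method might look like once the Microsoft.Extensions.Logging.Filter package is installed.  This is based on the 1.1-era APIs shown above; the "MyGpApi" category is a hypothetical placeholder for your own application's logger category, the AddConsole/AddDebug providers are just examples from the standard logging packages, and the app.UseMvc() call stands in for whatever middleware your app already configures.

    // Sketch of Startup.Configure with filtered logging (ASP.NET Core 1.1-era APIs).
    // Assumes: using Microsoft.AspNetCore.Builder; using Microsoft.AspNetCore.Hosting;
    //          using Microsoft.Extensions.Logging;
    public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
    {
        loggerFactory
            .WithFilter(new FilterLoggerSettings
            {
                { "Microsoft", LogLevel.Warning },   // suppress framework chatter below Warning
                { "System", LogLevel.Warning },
                { "MyGpApi", LogLevel.Debug }        // hypothetical category for your own code
            })
            .AddConsole()
            .AddDebug();

        app.UseMvc();
    }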

And here is my log file with an Information and Warning message, but no ASP.NET Core messages.


And several wasted hours later, I am now able to read my log file and actually work on the real project code.

Best of luck with ASP.NET Core.  You'll need it.


You can also find him on Twitter, YouTube, and Google+




Changing Visual Studio keyboard shortcut for Comment and Uncomment

By Steve Endow

A very handy feature in Visual Studio is the Comment / Uncomment editing option.

There are two buttons that allow you to comment or uncomment code with a single click.


While those buttons are handy, they require you to use the mouse, and that can sometimes be tedious if you also have to make multiple code selections with the mouse.

Visual Studio does have keyboard shortcuts for Comment and Uncomment, but they are the unfortunate double-shortcut combinations:  Ctrl+K, Ctrl+C to comment, and Ctrl+K, Ctrl+U to uncomment.

I find those shortcuts to be pretty annoying, as they require me to use both hands to press those key combinations.  It's not much of a "shortcut".

After several years of this nagging me, I finally bothered to look up a better alternative.  Fortunately, Visual Studio allows you to add your own keyboard shortcuts.  If you click on Tools -> Options, and then select Environment -> Keyboard, you can select a command and assign a new keyboard shortcut.

The one challenge is finding a decent keyboard shortcut that isn't already taken.

I entered the word "comment" and it displayed the relevant commands.  I then selected Edit.CommentSelection, set "Use new shortcut in" to "Text Editor", pressed Alt+C, then clicked Assign.

Now I can comment a selection using the nice and simple Alt+C shortcut.  Big improvement.


I don't Uncomment as much, so for now I haven't assigned a custom shortcut to Edit.UncommentSelection, but at least I now know it's very easy to do.

Keep on coding...and commenting...



You can also find him on Twitter, YouTube, and Google+





ASP.NET Core and EF Core with Dynamics GP: Trim trailing spaces from char fields

By Steve Endow

Anyone who has written SQL queries, built integrations, or otherwise had to deal with Dynamics GP data certainly has warm feelings about the colonial-era use of the char data type for all string fields.

This has the lovely side effect of returning string values with trailing spaces that you invariably have to deal with in your query, report, application, XML, JSON, etc.

In the world of SQL queries, you can spot a Dynamics GP consultant a mile away by their prolific use of the RTRIM function in SQL queries.  .NET developers will similarly have Trim() statements thoroughly coating their data access code.

But in this bold new age of Microsoft development tools, where everything you have spent years learning and mastering is thrown out the window, those very simple solutions aren't readily available.

I am developing an ASP.NET Core web API for Dynamics GP, and being a sucker for punishment, I'm also using EF Core for data access.  In one sense, EF Core is like magic--you just create some entities, point it to your database, and presto, you've got data.  Zero SQL.  That's great and all if you have a nice, modern, clean, well designed database that might actually use the space age varchar data type.

But when you're dealing with a relic like a Dynamics GP database, EF Core has some shortcomings.  It isn't really designed to speak to a prehistoric database.  Skipping past the obvious hassles, like exposing the cryptic Dynamics GP field names, one thing you'll notice is that it dutifully spits out the char field values with trailing spaces in all of their glory.

When you convert that to JSON, you get this impolite response:

"itemnmbr": "100XLG                         ",
"itemdesc": "Green Phone                                                                                          ",

"itmshnam": "Phone          "


Yes, they're just spaces, and it's JSON--not a report output, so it's not the end of the world.  But in addition to looking like a mess, the spaces are useless, bloat the response, and may have to be trimmed by the consumer to ensure no issues on the other end.

So I just spent a few hours trying to figure out how to deal with this.  Yes, SpaceX is able to land freaking rockets on a floating barge in the middle of the ocean, while I'm having to figure out how to get rid of trailing spaces.  Sadly, I'm not the only one--this is a common issue for many people.

So how can we potentially deal with this?

1. Tell EF Core to trim the trailing spaces.  As far as I can tell, this isn't possible as of June 2017 (v1.1.1).  EF Core apparently doesn't have a mechanism to call a trim function, or any function, at the field level. It looks like even the full EF 6.1+ framework didn't support this, and you had to write your own code to handle it--and that code doesn't appear to work in EF Core as far as I can tell.

2. Tell ASP.NET Core to trim the trailing spaces, somewhere, somehow.  There may be a way to do this in some JSON formatter option, but I couldn't find any clues as to how.  If someone has a clever way to do this, I'm all ears, and I'll buy you a round at the next GP conference.

3. Use the Trim function in your class properties.  Ugh.  No.  This would involve using the old school method of adding backing fields to your DTO class properties and using the Trim function on every field. This is annoying in any situation, but to even propose this with ASP.NET Core and EF Core seems like sacrilege.  And if you have used scaffolding to build out your classes from an existing database, this is just crazy talk.  I'm not going to add hundreds of backing fields to hundreds of string properties and add hundreds of Trim calls.  Nope.

4. Use an extension method or a helper class.  This is what I ended up doing.  This solution may seem somewhat obvious, but in the world of ASP.NET Core and EF Core, this feels like putting wagon wheels on a Tesla.  It's one step up from adding Trim in your classes, but looping through object properties and trimming every field is far from high tech.  Fortunately it was relatively painless, requires very minimal code changes, and is very easy to rip out if a better method comes along.

There are many ways to implement this, but I used the code from this post:

https://stackoverflow.com/questions/7726714/trim-all-string-properties


I created a new static class called TrimStrings, and I added the static method to the class.

    public static class TrimStrings
    {
        //https://stackoverflow.com/questions/7726714/trim-all-string-properties
        public static TSelf TrimStringProperties<TSelf>(this TSelf input)
        {
            var stringProperties = input.GetType().GetProperties()
                .Where(p => p.PropertyType == typeof(string));

            foreach (var stringProperty in stringProperties)
            {
                string currentValue = (string)stringProperty.GetValue(input, null);
                if (currentValue != null)
                    stringProperty.SetValue(input, currentValue.Trim(), null);
            }
            return input;
        }
    }



I then modified my controller to call TrimStringProperties before returning my DTO object.

    var item = _itemRepository.GetItem(Itemnmbr);

    if (item == null)
    {
        return NotFound();
    }

    var itemResult = Mapper.Map<ItemDto>(item);

    itemResult = TrimStrings.TrimStringProperties<ItemDto>(itemResult);


    return Ok(itemResult);
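
Incidentally, since TrimStringProperties is an extension method (note the this modifier on the parameter), the call in the controller can also be written in extension method syntax, which reads a bit more naturally and lets the compiler infer the type parameter:

    itemResult = itemResult.TrimStringProperties();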


And the new JSON output:

{
  "itemnmbr": "100XLG",
  "itemdesc": "Green Phone",
  "itmshnam": "Phone",
  "itemtype": 1,
  "itmgedsc": "Phone",

Fortunately this works, it's simple, and it's easy.  I guess that's all that I can ask for.



You can also find him on Twitter, YouTube, and Google+




Common GP integration error: Could not load file or assembly 'Microsoft.Dynamics.GP.eConnect, Version=XX.0.0.0

By Steve Endow

If you have a .NET integration for Dynamics GP that uses the eConnect .NET assemblies, this is a fairly common error:

Could not load file or assembly 'Microsoft.Dynamics.GP.eConnect, Version=11.0.0.0

Could not load file or assembly 'Microsoft.Dynamics.GP.eConnect, Version=12.0.0.0

Could not load file or assembly 'Microsoft.Dynamics.GP.eConnect, Version=14.0.0.0


This usually indicates that the integration was compiled with an older (or different) version of the eConnect .NET assemblies.

Why does this happen?

In my experience, there are two situations where you will usually see this.

1. You upgraded Dynamics GP to a new version, but forgot to update your .NET eConnect integrations.  For instance, if you upgraded from GP 2013 to GP 2016, you would see the "Version 12" error message when you run your integration, as the integration is still trying to find the GP 2013 version of eConnect.

2. You are working with an application or product that is available for multiple versions of GP, and the version you have installed doesn't match your GP version.


The good news is that this is simple to resolve.  In the first case, the developer just needs to update the Visual Studio project to point to the proper version of the eConnect DLLs.  Updating the .NET project shouldn't take very long--maybe 1-4 hours to update and test, depending on the complexity of the integration.  Or if you're using a product, you just need to get the version of the integration that matches your GP version.

If you have a custom .NET integration, the potential bad news is that you, or your developer, or your GP partner, needs to have the .NET source code to update the integration.  Some customers encounter this error when they upgrade to a new version of GP, and realize that the developer who wrote the code left the company 3 years ago and they don't know where the source code might be.  Some customers change GP partners and didn't get a copy of the source code from their prior partner.

If you can't get a copy of the source code, it is theoretically possible to decompile most .NET applications to get some or most of the source code, but in my limited experience as a novice user of such tools, decompilation just doesn't provide a full .NET project that can be easily updated and recompiled.  Or if it does, the code is often barely readable, and would be very difficult to maintain without a rewrite.
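
As a side note, if you do have the source code, one small thing that can make these cases easier to diagnose in the future is logging the eConnect assembly version that the integration actually loads at startup.  This is just a rough sketch, assuming a reference to Microsoft.Dynamics.GP.eConnect and its standard eConnectMethods class; wire it into whatever logging your integration already uses.

    // Log the eConnect assembly version the integration loaded, so it can be compared
    // against the installed GP version (roughly: 11.x = GP 2010, 12.x = GP 2013,
    // 14.x = GP 2015, 16.x = GP 2016).
    var eConnectAssembly = typeof(Microsoft.Dynamics.GP.eConnect.eConnectMethods).Assembly;
    Console.WriteLine("eConnect assembly version: " + eConnectAssembly.GetName().Version);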


You can also find him on Twitter, YouTube, and Google+




Adding or dropping SQL indexes temporarily on production database tables

By Steve Endow

During one of my presentations on Optimizing SQL Scripting at the GP Tech 2017 Conference this week at the Microsoft campus in Fargo, one of the attendees asked an interesting question:

Suppose you need to run a complex query just a few times, and you find that the query would benefit greatly from adding new indexes to one or more tables.  Rather than adding 'permanent' indexes, would it be prudent to temporarily add the indexes so that you could run the query faster, and then remove the indexes when you no longer need them?


I think it is a great question. I immediately thought of one likely real world scenario for this, and coincidentally, I shared a Lyft back to the airport with a consultant who described a second scenario where a slightly different index management process was required.


Scenario 1: Imagine that at the end of each financial quarter, dozens of large, complex financial and sales analysis reports are run against dozens of Dynamics GP company databases.  If some reports take a minute to run, and a few indexes can be added to reduce the report run times to a few seconds, that time savings could really add up.  I could definitely see the value of adding indexes to speed up this process.

But is it worth adding permanent indexes to tables to support the quarterly reports?  Or is it better to add the indexes once per quarter, run the reports, and then remove the indexes?

I don't currently know how to assess the actual costs vs. benefits of those situations, but given that Dynamics GP is already drowning in SQL indexes, and given that the indexes may be dropped during a GP upgrade (and it's easy to forget to recreate them), I think that creating the indexes temporarily seems like a reasonable solution for this hypothetical example.

The one concern I expressed about the solution was the potential for the CREATE INDEX process to lock the tables while the indexes were being built.

I did some research, and confirmed my concern that a table will be locked and inaccessible during the CREATE INDEX process.

This is mentioned in the "Performance Considerations" section of this Books On Line page:

https://technet.microsoft.com/en-us/library/ms190197(v=sql.105).aspx



Most Dynamics GP customers use SQL Server Standard Edition, so indexes are created "offline", and the table is locked until the create index operation completes.

SQL Server Enterprise edition does have an "online" indexing option, but from what I have been able to find, even that feature doesn't provide 100% accessibility of the table during the indexing operation, so there may be some challenges in very high volume environments.

If the temporary indexes make sense, my recommendation would be to add the indexes during a maintenance window, such as late at night, then run the queries the next day (or next few days), and then remove the indexes when they are no longer needed.


Scenario 2: A Dynamics GP consultant told me a story about a prior job where he had to bulk load millions of records into a table on a regular basis.  The bulk load had a very limited time window, so the import had to be completed as quickly as possible.

In order to speed up the import process, they dropped the indexes on the table, imported the millions of additional records into the table, and then added the indexes back to the table.  I hadn't considered that scenario before, but he explained it worked very well.

I was able to find this Books Online article on the topic, which provides recommendations on when to drop or keep indexes for bulk load operations, depending on whether the table is empty and how much new data is being imported.

https://technet.microsoft.com/en-us/library/ms177445(v=sql.105).aspx


So I learned a few interesting things myself during my session.  Hope this was helpful!



You can also find him on Twitter, YouTube, and Google+




Tales of Dynamics GP backups and ransomware

By Steve Endow

At the excellent Dynamics GP Tech 2017 Conference this week, I heard a few interesting stories about ransomware at Dynamics GP customers.

One partner told me a very interesting story about ransomware at a customer that encrypted everything, including the customer's Dynamics GP database backups.  The Dynamics GP partner was called in and he assessed the catastrophe.  Nothing was recoverable.

But he noticed something strange.  Dynamics GP was still working.  He logged into the SQL Server, and he saw that the Dynamics GP databases were still intact and were not encrypted.  He speculated that because SQL Server tenaciously locks the MDF and LDF files, the ransomware was apparently unable to encrypt the live database files.

He was able to stop the SQL Service, quickly copy all of the database files, and attach them on a clean SQL Server.  Luckily, that copy process worked and the ransomware was either inactive at that point, or it didn't have time to encrypt the unlocked database files.  In hindsight, I think I would probably first try doing full backups of all of the databases to ensure the MDF and LDF files remained locked, but saving the backup files to a clean location that can't be accessed by the ransomware would probably still be tricky.


Next, during her "Microsoft Azure: Infrastructure, Disaster Recovery, and Backups" session, Windi Epperson shared some harrowing stories about tornadoes in Oklahoma.


Some of Windi's customers have had entire buildings vaporized by a tornado, so even the best on-site backup would have been insufficient.  Windi discussed the Azure Backup service, which I didn't even know about, as a flexible and economical way to get all types of backups off site.  She also demonstrated the Dynamics GP backup to Azure feature that she recommends for small customers who don't have the IT staff to handle off site backups.

https://azure.microsoft.com/en-us/services/backup/


I currently have a lot of my data backed up on Backblaze B2 storage through my Synology NAS device, but that is only through a connected sync process, and is not a true archive backup.  I've been looking for a more traditional disconnected off site backup storage service that is reasonably priced, so I'm going to look into Azure Backup and see if I can set up a process that can automatically back up what I need.


You can also find him on Twitter, YouTube, and Google+





Stop typing passwords...completely

By Steve Endow

Many, many, many years ago I finally got tired of remembering all of my passwords, and started using an Excel file to track them. After a few years of that, I got tired of insecurely tracking passwords in Excel and started using RoboForm to manage my passwords.  It had a few rough edges way back then, but worked well enough and also worked on my BlackBerry (yup, it was a long time ago).  I now manage a few thousand logins and notes in RoboForm, and needless to say, it's pretty essential in my daily life.

So that's great.  But there are still a few passwords I am having to constantly type.  Every time I sit down at my desk, I have to login to Windows.  I've been doing it for so many years that it's second nature.  I don't even think twice about it--it's pure muscle memory.  Except when I mistype my password or don't realize that Caps Lock is on, and it takes me 3-4 tries.  Grrr.

The second password I am constantly typing is my RoboForm master password.  So when a web site needs a login and I tell RoboForm to handle it, RoboForm will sometimes prompt me to enter my master password if I've just unlocked my desktop or have been away for a few hours.  Again, I've been doing it for so many years that I don't even think about it.

Then came the iPhone fingerprint sensor called Touch ID.  It has taken a few years to gain traction, but now I can use my fingerprint to unlock my phone, pay for my groceries, log in to my banking apps, and...access the RoboForm iOS app.  It is absolutely fantastic.  Typing my long RoboForm master password on my phone was moderately painful, so being able to use Touch ID to unlock RoboForm on my phone was a wonderful improvement.  Once you start using Touch ID, it becomes strange to see a password prompt on the iPhone.

Then, a few years ago, I bought a Surface Pro 4 (which I do not recommend, at all, long story).  While shopping for the Surface Pro 4, I didn't know anything about Windows Hello, and I didn't realize that the Surface Pro 4 had an infrared web cam that could be used for face recognition authentication with Windows Hello.  But when I saw that Microsoft offered a keyboard with an integrated fingerprint reader, I knew I wanted one.  I waited a few months until the keyboard with fingerprint reader was in stock before buying the SP4, and I'm glad I waited.

After a few dozen firmware updates and software fixes made the horrible SP4 minimally usable and allowed the keyboard to actually work, the fingerprint reader on the SP4 keyboard was great.  It was surprisingly fast and easy to use.  It was much faster and more reliable than the Windows Hello face recognition, so I ended up using the fingerprint reader quite a bit.

But I still kept typing in my RoboForm password on my laptop...until one day I was poking around in the RoboForm settings and I accidentally discovered that RoboForm supported fingerprint authentication!  Eureka!  I don't know when the support was added, but I wasn't about to complain.


I enabled the fingerprint support and like magic, RoboForm unlocked with a touch of my finger.  Wow.  This was YUGE.

Having suffered for a few years with the SP4, I finally gave up and bought a real laptop, a Lenovo X1 Carbon 2017, and was thrilled that it had an integrated fingerprint reader as a standard feature.  Having experienced how useful the reader was on the SP4, I was just as happy with it on the Lenovo X1.  And after installing RoboForm on the X1 Carbon, I enabled fingerprint support and was on my way.

So life was then grand in mobile-land.  My phone and laptop had seamless fingerprint authentication to login and authenticate with RoboForm.

Which made using my desktop painful.  I actually...had to... type... my... Windows... password... every... single... time...I sat down.  After being spoiled by my iPhone and my laptop, it felt like a complete anachronism to actually have to TYPE (gasp!) my password!  Barbaric!

I apparently started to get rusty and seemed to regularly mistype my password on my desktop.  I then had several cases where it took me 4 or 5 password attempts before realizing Caps Lock was on.  Ugh.  I felt like I was in the stone ages, where Minority Report style authentication didn't actually exist.  It was...unacceptable.

So I searched for desktop fingerprint readers for Windows.  And...I was underwhelmed.  I found one that looked legit, for about $100, but the reviews were very mixed, citing driver issues, and it appeared that the company had been acquired and had since disappeared.  After seeing the mixed reviews on other models, I gave up.

But after a few more weeks of password typing punishment, I tried again and figured I would reconsider the small mini fingerprint readers that seem to have been designed for laptops.  A few seemed okay, but again, mixed reviews.

After a few more searches, I found one that seemed legit, and seemed designed for Windows 10 Windows Hello authentication.  (there are probably a few others that work well, but caveat emptor and read the reviews)

https://www.amazon.com/gp/product/B06XG4MHFJ/


It was only $32 on Amazon and seemed to have pretty good reviews, so I gladly bought it.  I plugged it in to my Windows 10 desktop, Windows automatically detected it and set it up, and then I added a fingerprint in Windows Hello.  I then enabled fingerprint support in RoboForm.

Based on my tests so far, it works great.  I can now unlock my desktop by very briefly touching the sensor with my finger.  And I no longer have to type my RoboForm master password, which is a huge, huge benefit.  Just like my iPhone and my laptop.  No more passwords.

To make it more accessible and easier to use, I plugged the fingerprint sensor into a USB extension cable and then attached that cable to the back of my keyboard with a little hot glue.  Now, whenever I need to login or enter a password, I just move my hand to the left side of my keyboard and give the sensor a quick touch.



It's quite surprising how fast it is, and it's much, much faster than typing my password.  In fact, I don't even have to press a key on my keyboard.  From the Windows lock screen, I can just touch the sensor and login.

Once I'm in Windows, when I need to unlock RoboForm, it's just a quick touch of the sensor, and it's unlocked.


If you aren't using fingerprint sensors on every device you own, I highly recommend it.  I now use fingerprints on my iPhone, iPad, laptop, and desktop and it's a huge convenience.  You don't realize what a hassle passwords are until you start using your fingerprint to authenticate.

It's taken me several years to use fingerprints on all of my devices, but I'm finally there and it's glorious.


You can also find him on Twitter, YouTube, and Google+






Importing SOP Orders with sales taxes using eConnect

By Steve Endow

I don't remember if I've ever had to import Dynamics GP Sales Orders with sales tax amounts before.  If I have, it's been so long that I've completely forgotten about it.

So let's just say that today was a mini adventure.

My customer is importing "multi-channel" web site orders that are coming from major national retailers and online merchants.  Some of them calculate and charge sales tax, while others do not.  The customer is using Avatax with Dynamics GP, so Avatax is ultimately handling the final sales tax calculation.

For a few reasons that I'm not entirely clear on, the customer wanted to import the sales tax amounts for the web sites that calculated and provided sales tax--even though Avatax would be recalculating the taxes.  And thus began the journey of figuring out the quirky and barely documented process of importing Sales Order header level taxes using eConnect.

We first tried sending in the sales tax amount to the taSopHdrIvcInsert TAXAMNT node.  That resulted in this error:

Error Number = 799  
Stored Procedure= taSopHdrIvcInsert  Error Description = Tax table detail does not equal the tax amount


In the famously ironic process of Googling this error, I found my own thoughts on this error in this forum post.

https://community.dynamics.com/gp/f/32/t/140923


While my response to the post didn't directly address my issue, it gave me some clues.  I used SQL Profiler to trace the activity of my eConnect import and confirmed that the SOP10105 table was not being touched and that taSopLineIvcTaxInsert was not being called.

I checked the eConnect documentation on SOP taxes, but it might as well have been Greek.  I now see that there is one key sentence that is a clue, but without knowing what to look for, it didn't make any sense.

Let me know if you are able to spot the clue.


But it seemed like the taSopLineIvcTaxInsert node may be required even for header level taxes. Which made me concerned that I might have to send it in for each order line item--which would be a hassle.

I updated my eConnect code to add tax lines to my order, leaving out LNITMSEQ because I was only sending in header level taxes, and it resulted in this:

<taSopLineIvcTaxInsert_Items>
    <taSopLineIvcTaxInsert>
        <SOPTYPE>2</SOPTYPE>
        <SOPNUMBE>WEB0006</SOPNUMBE>
        <CUSTNMBR>CUST0001</CUSTNMBR>
        <SALESAMT>78.75</SALESAMT>
        <TAXDTLID>AVATAX</TAXDTLID>
        <STAXAMNT>5.75</STAXAMNT>
    </taSopLineIvcTaxInsert>
</taSopLineIvcTaxInsert_Items>


That did the trick.  The order imported successfully, the sales tax amount came in properly, and the SOP10105 table was populated.

So if you need to import SOP transactions with sales taxes, it appears you have to include taSopLineIvcTaxInsert.
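
For reference, if you build your eConnect XML with the .NET serialization classes rather than assembling it by hand, the header-level tax node above would look roughly like the sketch below.  This is only a sketch: it assumes the standard classes generated from the eConnect XSDs in Microsoft.Dynamics.GP.eConnect.Serialization, and the exact class names and the "...Specified" flag properties can vary by GP version, so verify them against your own eConnect assemblies.

    // Sketch: header-level tax detail for a SOP order using the eConnect serialization classes.
    // Class and property names follow the usual generated pattern but may differ by version.
    var tax = new taSopLineIvcTaxInsert_ItemsTaSopLineIvcTaxInsert();
    tax.SOPTYPE = 2;                    // 2 = Order
    tax.SOPNUMBE = "WEB0006";
    tax.CUSTNMBR = "CUST0001";
    tax.SALESAMT = 78.75m;
    tax.SALESAMTSpecified = true;
    tax.TAXDTLID = "AVATAX";
    tax.STAXAMNT = 5.75m;
    tax.STAXAMNTSpecified = true;
    // LNITMSEQ is intentionally not set, since this is a header-level tax (no line item sequence).

    // salesOrder is the SOPTransactionType that already contains taSopHdrIvcInsert and the lines.
    salesOrder.taSopLineIvcTaxInsert_Items = new[] { tax };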

Good times!

Steve Endow is a Microsoft MVP in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Twitter, YouTube, and Google+

http://www.precipioservices.com


Bug in Dynamics GP eConnect taCreateSOPTrackingInfo: Error 4628

By Steve Endow

I'm working on an import that will insert shipment tracking numbers for Dynamics GP SOP Sales Orders.  Seems pretty straightforward.

When I attempt to import the tracking number for an order, I get this error from eConnect.

Error Number = 4628  
Stored Procedure= taCreateSOPTrackingInfo  
Error Description = The Tracking Number (Tracking_Number) is empty

Node Identifier Parameters: taCreateSOPTrackingInfo
SOPNUMBE = WEB0001
SOPTYPE = 2
Tracking_Number = 1Z12345E0205271688
Related Error Code Parameters for Node : taCreateSOPTrackingInfo
Tracking_Number = 1Z12345E0205271688

<taCreateSOPTrackingInfo>
  <SOPNUMBE>WEB0001</SOPNUMBE>
  <SOPTYPE>2</SOPTYPE>
  <Tracking_Number>1Z12345E0205271688</Tracking_Number>
</taCreateSOPTrackingInfo>


It seems pretty obvious that something isn't right with this error.  Clearly the tracking number is being supplied.

So off we go to debug eConnect.

When we open the taCreateSOPTrackingInfo stored procedure and search for error 4628, we see this gem:

    IF (@I_vTracking_Number <> '')
        BEGIN
            SELECT @O_iErrorState = 4628;
            EXEC @iStatus = taUpdateString @O_iErrorState, @oErrString,
                @oErrString OUTPUT, @iAddCodeErrState OUTPUT;
        END;



So.  If the tracking number parameter has a value, the stored procedure returns error 4628, saying that the tracking number is empty.  Genius!

I altered the procedure so that the IF statement uses an equals sign, which eliminated the error, and the tracking numbers imported fine.

    IF (@I_vTracking_Number = '')
        BEGIN
            SELECT @O_iErrorState = 4628;
            EXEC @iStatus = taUpdateString @O_iErrorState, @oErrString,
                @oErrString OUTPUT, @iAddCodeErrState OUTPUT;
        END;



What is baffling is that this bug exists in GP 2016, 2015, and 2013, which is where I stopped looking.  I'm assuming that it has existed prior to 2013.

However, I recently worked with another customer who imports tracking numbers for their SOP Orders, but they did not receive this error.  Why?

Looking at their taSopTrackingNum procedure, I see that it is an internal Microsoft version of the procedure that was customized by MBS professional services for the customer.  The stored procedure was based on the 2005 version from GP 9, and it does not appear to have the validation code.  Because it is customized, it was simply carried over with each GP upgrade, always replacing the buggy updated version that is installed with GP.

So some time between 2005 and 2013, someone monkeyed with the procedure, added error 4628, and didn't bother to test their changes.  And the bug has now existed for over 4 years.

I can't possibly be the only person to have run into this.  Can I?  Does nobody else use this eConnect node?

Anyway, the good news is that it's easy to fix.  But just remember that every time you upgrade GP, that buggy proc is going to get reinstalled, and if you forget to reapply the fix, your tracking number imports will start failing.

Carry on.


Steve Endow is a Microsoft MVP in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Twitter, YouTube, and Google+





Multiple hard drive failures on a Synology NAS: Lessons Learned

By Steve Endow

This is a long post, but I think the context and the entire story help paint a picture of how things can fail in unexpected and odd ways, and how storage failures can be more complicated to deal with than you might expect.  I've learned several lessons so far, and I'm still in the middle of it, so I may learn more as things unfold.

On Tuesday evening, I received several emails from my backup software telling me that backup jobs had failed.  Some were from Veeam, my absolute favorite backup software, saying that my Hyper-V backups had failed.  Others were from Acronis True Image, saying that my workstation backup had failed.

Hmmm.


Based on the errors, it looks like both backup apps were unable to access my Synology NAS, where their backup files are stored.

That's odd.

When I tried to access the UNC path for my Synology on my Windows desktop, I got an error that the device could not be found.  Strange.

I then opened a web browser to login to the Synology.  But the login page wouldn't load.  I then checked to make sure the Synology was turned on.  Yup, the lights were on.

After several refreshes and a long delay, the login page eventually loaded, but I couldn't login.  I then tried connecting over SSH using Putty.  I was able to connect, but it was VERY slow.  Like 30 seconds to get a login prompt, 30 seconds to respond after entering my username, etc.  I was eventually able to login, so I tried these commands to try and reboot the Synology via the SSH terminal.

After issuing the command for a reboot, the power light started blinking, but the unit didn't shutdown.  Strangely, after issuing the shutdown command, I was able to login to the web interface, but it was very slow and wasn't displaying properly.  I eventually had to hold the power button down for 10 seconds to hard reset the Synology, and then turned it back on.

After it rebooted, it seemed fine.  I was able to browse the shares and access the web interface.  Weird.

As a precaution, I submitted a support case with Synology asking them how I should handle this situation in the future and what might be causing it.  I didn't think it was a big deal.

On Wednesday evening, I got the same error emails from my backup software.  The backups had failed.  Again.  Once again, the Synology was unresponsive, so I went through the same process, and eventually had to hard reset it to login and get it working again.

So at this point, it seemed pretty clear there is a real problem.  But it was late and I was tired, so I left it and would look into it in the morning.

On Thursday morning, the Synology was again unresponsive.  Fortunately, I received a response from Synology support and sent them a debug log that they had requested.  Within 30 minutes I received a reply, informing me that the likely issue was a bad disk.

Apparently the bad disk was causing the Synology to deal with read errors, and that was actually causing the Synology OS kernel to become unstable, or "kernel panic".


This news offered me two surprises.  First, I was surprised to learn that I had a bad disk.  Why hadn't I known that or noticed that?

Second, I was surprised to learn that a bad disk can make the Synology unstable.  I had assumed that a drive failure would be detected and the drive would be taken offline, or some equivalent.  I would not have guessed that a drive could fail in a way that would make the NAS effectively unusable.

After reviewing the logs, I found out why I didn't know I had a bad drive.


The log was filled with hundreds of errors, "Failed to send email".  Apparently the SMTP authentication had stopped working months ago, and I never noticed.  I get so much email that I never noticed the lack of email from the Synology.

The drive apparently started to have problems back in July, but up until this week, the Synology seemed to still work, so I had no reason to suspect a problem.

Synology support also informed me that the unit was running a "parity consistency check" to try and verify the data on all of the drives.  This process normally slows the unit down, and the bad drive makes the process painfully slow.

After a day and a half, the process is only 20% complete, so this is apparently going to take 4-5 more days.


So that's great and all, but if I know I have a bad drive, can't I just replace the drive now and get on with the recovery process?  Unfortunately, no.  Synology support said that I should wait for the parity consistency check to complete before pulling the bad drive, as the process is "making certain you are not suffering data/ volume corruption so you can later repair your volume with no issues."

Lovely.  So waiting for this process to complete is preventing me from replacing the bad drive that is causing the process to run so slowly.  And I'm going to have to wait for nearly a week to replace the drive, all the while hoping that the drive doesn't die completely.

I'm sensing that this process is less than ideal.  It's certainly much messier than what I would have expected from a RAID array drive failure.

But that's not all!  Nosiree!


In addition to informing me that I have a bad drive that is causing the Synology to become unusable, it turns out that I have a second drive that is starting to fail in a different manner.


Notice that Disk 6 has a Warning status?  That's actually the second bad drive.  The first bad drive is Disk 2, which shows a nice happy green "Normal" status.

After reviewing my debug log, Synology support warned me that Disk 6 is accumulating bad sectors.

Sure enough, 61 bad sectors.  Not huge, but a sign that there is a problem and it should probably be replaced.


Lovely.

So why did I not know about this problem?  Well, even if SMTP had been working properly on my Synology, it turns out that the bad sector warnings are not enabled by default on the Synology.  So you can have a disk failing and stacking up bad sectors, but you'd never know it.  So that was yet another thing I learned, and I have now enabled that warning.


So, here's where I'm at.

I've fixed the email settings so that I am now getting email notifications.

I'm 20% into the parity consistency check, and will have to wait 5+ more days for that to finish.

As soon as I learned that I had 2 bad drives on Thursday morning, I ordered two replacement drives.  I paid $50 for overnight express shipment with morning delivery.  Because I wanted to replace the drives right away, right?  But that was before Synology emphasized that I should wait for the parity check to complete.  So those drives are going to sit in the box for a week--unless a drive dies completely in the meantime.

If the parity check does complete successfully, I'll be able to replace Drive 2, which is the one with the serious problems.  I'll then have to wait for the Synology to rebuild the array and populate that drive.

Once that is done, I'll be able to replace Drive 6, and wait for it to rebuild.

Great, all done, right?

Nope.  I'll need to hook up the two bad drives and run the manufacturer diagnostics and hopefully get clear evidence of an issue that allows me to RMA the drives.  Because I will want the extra drives.  If I can't get an RMA, I'll be buying at least 1 new drive.

This experience has made me think differently about NAS units.  My Synology has 8 drive bays, and I have 6 drives in it.  The Synology supports hot spare drives, so I will be using the additional drives to fill the other two bays and have at least one hot spare available, and most likely 2 hot spares.

Previously, I didn't think much of hot spares.  If a drive fails, RAID lets you limp along until you replace the bad drive right?  In concept.  But as I have experienced, a "drive failure" isn't always a nice clean drive death.  And this is the first time I've seen two drives in the same RAID array have issues.

And it's also shown me that when drives have issues, but don't fail outright, they can make the NAS virtually unusable for days.  I had never considered this scenario.  While I'm waiting to fix my main NAS, my local backups won't work.  And this Synology is also backing up its data to Backblaze B2 for my offsite backup.  That backup is also disabled while the parity check runs.  And I then have another on-site backup to a second Synology unit using HyperBackup.  Again, that backup is not working either.  So my second and third level backups are not available until I get my main unit fixed.

Do I redirect my backup software to save to my second Synology?  Will that mess up my backup history and backup chains?  I don't know.  I'll have to see if I can add secondary backup repositories to Veeam and Acronis and perhaps merge them later.

Another change I'll be making is to backup more data to my Backblaze B2 account.  I realized that I was only backing up some of the data from my main Synology to B2.  I'll now be backing up nearly everything to B2.

So this has all been much messier than I would have imagined.  Fortunately it hasn't been catastrophic, at least not yet.  Hopefully I can replace the drives and everything will be fine, but the process has made me realize that it's really difficult to anticipate the complications from storage failures.


You can also find him on Twitter, YouTube, and Google+






eConnect error: The target principal name is incorrect. Cannot generate SSPI context.

By Steve Endow

A customer recently encountered this error with a Dynamics GP eConnect integration:


The target principal name is incorrect. Cannot generate SSPI context.

Just before this error was reported, a new version of a custom Dynamics GP AddIn had been deployed, so I got the support call, as the partner and customer thought the error was related to the new AddIn.

But this error is related to the eConnect user authentication with SQL Server, so deploying a new DLL shouldn't have affected that authentication.

I recommended that the customer's IT team check the status of the eConnect windows service on the terminal server and try restarting it.  The eConnect service was running, but when they restarted the service, they received a login error.

It seems that some other process on the client's network was attempting to use the Active Directory account assigned to the eConnect service on the terminal server.  That other process, whatever it is, apparently has an invalid or old password for the domain account.  So it was failing to login and locking the Active Directory account.

Once the account was locked, the eConnect service on the terminal server would begin receiving the SSPI context errors, as its authentication with SQL Server would fail once the account was locked.

The IT team had previously tried to reset the eConnect account password, but it would just get locked out again by the mystery app or process that was still trying to use the same domain account.  So I recommended that they create a new dedicated domain account for use by the eConnect windows service on the terminal server.

Once they set up the new domain account and updated the eConnect windows service to use the new account, the problem went away.

However, this morning the error seemed to occur again, but restarting the eConnect service appears to have resolved it.  Given this odd recurrence, there may be some other cause or details that may be contributing to the problem.

Steve Endow is a Microsoft MVP in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Twitter, YouTube, and Google+




The 10th and 11th ways you can lose your SQL data...

By Steve Endow

Brent Ozar has an excellent post where he shares 9 stories about how customers lost some or all of their SQL Server data.

https://www.brentozar.com/archive/2015/02/9-ways-to-lose-your-data/


What's great about his stories is that as I read each one, I thought "Yep, I can totally see that happening."  A simple oversight, a small mistake, one person making a change without realizing it affected other systems, or simply forgetting to change back a single setting in SQL Server.  The one about invalid SQL SMTP settings preventing error emails from going out reminded me of my recent Synology drive failures, as I also had invalid SMTP settings and hadn't received the hundreds of error emails telling me I had a problem--so I am certain that is a common symptom.

While stories about hurricanes, floods, tornadoes, or fires may provide great drama for discussion about disaster recovery, I suspect that there are far more disasters that are caused by a few clicks of a mouse, followed by "Ooops." (or "OH MY GOD WHAT HAVE I DONE???")

I have two data loss stories of my own to add to the SQL data loss lore.


Pulling the Wrong Drive

Many years ago, I was a "business systems consultant" for a Big 6 (at the time) consulting firm and somehow ended up helping a customer with their Solomon IV implementation after their sole IT employee quit.  I knew Solomon IV, knew VB, knew SQL, and knew hardware, so I was juggling everything and helping them finish their implementation.

Their Hewlett Packard server that hosted the Solomon IV databases was having some issues with its RAID array.  The server had mirrored drives that hosted the database files, and occasionally that mirror would 'break' for no good reason.  Windows would mark one drive as inactive, and the server would run on one of the drives until we removed the inactivated drive, reinserted it, and repaired the array.  This had happened once or twice before, and I was on site at the customer when it happened again.  I checked Windows, checked the array, confirmed the mirror had broken.  I then pulled the drive, reinserted the drive, and then started the array rebuild.  No problem.

Shortly after that, a user noticed that a transaction they entered that morning was no longer available in Solomon.  Then another user.  Then another.  We eventually discovered that all of the transactions and data that had been entered that day were gone.  What happened?

After pondering for a while, I realized what I had done.  When the RAID mirror broke, Windows would say that one drive had been inactivated, but it wasn't always clear which drive had been inactivated.  You had to poke around to figure out if it was the drive on the left or the drive on the right--I don't remember the process, and it might have even been as high tech as watching to see which blinky light on one of the drives wasn't blinking.

I had either mis-read the drive info or not looked carefully enough, and I had pulled out the wrong drive.  The active drive.  The one that was working and had been saving the transactions and data that day.  After I reinserted the drive, I then chose the 'bad' drive, the one that hadn't been active at all that day, marked it as the primary, and then rebuilt the mirror with the old data from that drive.  Thereby losing the data that had been entered that day.

This was pre-SQL Server, so we didn't have transaction log backups; even if we had a full backup from the prior evening, it wouldn't have helped, as it was only that day's data that was lost.  Fortunately, I think it was only mid-day, so the users only lost the data from that morning and were able to reconstruct the transactions from paper, email, and memory.

Ever since I made that mistake, I am extremely paranoid about which physical drive is mapped to RAID arrays or Windows drive letters.  If you've built a PC or server in the last several years, you may know that Windows will assign drive letters semi-randomly to SATA drives.  And when I had two bad drives in my Synology, I double and triple checked that the drive numbers provided by the Synology did in fact map to the physical drives in the unit, from left to right.

I'm hoping that I never pull the wrong drive again.


Test vs. Production

In Brent's blog post, he shared a story about someone logging into the wrong server--they thought they had logged into a test environment, but were actually dropping databases in production.

I have a similar story, but it was much more subtle, and fortunately it had a happier ending.

I was testing a Dynamics GP Accounts Payable integration script.  I must have been testing importing AP invoices, and I had a script to delete all AP transactions from the test database and reload sample data.  So I'm running my scripts and doing my integration testing, and a user calls me to tell me that they can't find an AP transaction.  We then start looking, and the user tells me that transactions are disappearing.  What?

As we were talking, all of the posted AP transactions disappeared.  All AP history was gone.

Well, that's weird, I thought.

And then it hit me.  My script.  That deletes AP transactions.  That I ran on the Test database.

But how?

Somehow, I apparently ran that script against the production company database.  I was probably flipping between windows in SQL Management Studio and ended up with the wrong database selected in the UI.  And the customer had so much AP data that it took several minutes to delete it all, as I was talking to the user, and as we watched the data disappear.

You know that gut wrenching feeling of terror when your stomach feels like it's dropped out of your body?  Followed by sweat beading on your brow?  That's pretty much how I felt once I guessed that I had probably accidentally run my Test Delete script on the production database.  Terror.

In a mad scramble that amazes me to this day, I somehow kept my sanity, figured out what happened, and came up with an insane plan to restore the AP data.  Fortunately, the customer had good SQL backups and had SQL transaction logs.  For some reason, I didn't consider a full database restore--I don't recall why--perhaps it was because it would require all users to stop their work and we would have lost some sales data.  So I instead came up with the crazy idea of reading the activity in the SQL log files.  Like I said, insane.

So I found an application called SQL Log Rescue by RedGate Software that allowed me to view the raw activity in SQL Server log files.  I was able to open the latest log file, read all of the activity, see my fateful script that deleted all of the data.  I was also able to view the full data of the records that were deleted and generate SQL scripts that would re-insert the deleted data.  Miraculously, that crazy plan worked, and SQL Log Rescue saved me.  I was able to insert all of the data back into the Accounts Payables tables, and then restart my heart.

Thinking back on it, I suspect that the more proper approach would have been to do a SQL transaction log backup and then perform a proper point in time recovery of the entire database.  Or I could have restored to a new database and then copied the data from the restore into production.  But as Brent's stories also demonstrate, we don't always think clearly when working through a problem.


So when you're planning your backup routines and disaster recovery scenarios, review the stories that Brent shares and see if your backup plans would handle each of them.  And then revisit them again occasionally to make sure the backups are working and you are still able to handle those scenarios.


Steve Endow is a Microsoft MVP in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.


You can also find him on Twitter, YouTube, and Google+



Free SFTP file transfer and data export tool for Dynamics GP file-based integrations

By Steve Endow

A somewhat common requirement for file-based integrations between Dynamics GP and external services or SaaS solutions involves uploading or downloading files from an SFTP server (SSH File Transfer Protocol, completely different from the similarly named FTP or FTPS).  SFTP has some technical quirks, so it is often a hassle for customers to automate SFTP file transfers as part of their Dynamics GP integrations.

Some of those integrations also involve exporting data from Dynamics GP to a CSV file and uploading that data to an SFTP server.

To handle this task, I have developed an application that can export data from GP, save it to a CSV file, and upload it to an SFTP server.  It can also download files from an SFTP server.  The tool is fully automated, can be scheduled using Windows Task Scheduler, and it includes file archiving, logging, and email notification in case of errors.

If you use Blackline, Coupa, IQ BackOffice, or any other provider or service that requires upload or download of files with an SFTP server, this tool may be helpful.  It can be used in place of WinSCP or similar tools that require command line scripting.

I am offering this tool for free to the Dynamics GP community.  It can be downloaded from my web site at:

https://precipioservices.com/sftp/

The download includes a user guide and sample configuration file.  There are quite a few configuration settings, so please make sure to review the documentation to understand how the settings are used.

If you end up using the Precipio SFTP tool, I would love to hear which system or service you are using it with and how it works out for you.

If you have questions or encounter issues, you can contact me through my web site at:

https://precipioservices.com/contact-us/



You can also find him on Twitter, YouTube, and Google+









Back up your Dynamics GP SQL Server databases directly to Azure Storage in minutes!

By Steve Endow

At the GP Tech Conference 2017 in lovely Fargo, ND, Windi Epperson from Advanced Integrators had a great session about Disaster Recovery. One topic she discussed was the ability to use the Dynamics GP Back Up Company feature to save SQL backups directly to Azure.


I think doing SQL backups to Azure is a great idea. There are countless tales of SQL backups not being done properly or being lost or not being retained, and having an option to send an occasional SQL backup to Azure is great.

But this option is a manual process from the Dynamics GP client application, it is not scheduled, and it does not use the "Copy-only backup" option, so the backups will be part of the SQL backup chain if the customer also has a scheduled SQL backup job.  So as Windi explained, it may be a great option for very small customers who can reliably complete the task manually on a regular basis.

But how about setting up a backup job in SQL Server that will occasionally send a backup to Azure?

It turns out that the process is remarkably easy and takes just a few minutes to setup and run your first backup to Azure Storage.

NOTE: From what I can tell, SQL backups to Azure are supported in SQL 2012 SP1 CU2 or later.  It also appears that the backup command syntax differs slightly between SQL 2012/2014 and SQL 2016, which introduced a newer syntax (a rough sketch of the newer syntax follows the backup scripts below).


The hardest part is setting up your Azure account and creating the appropriate Azure Storage account.  It took me a few tries to find the correct settings.

First, you have to have an Azure account, which I won't cover here, but it should be a pretty simple process.  Here is the sign up page to get started:  https://azure.microsoft.com/en-us/free/

Once you have your Azure account setup and have logged in to the Azure Portal (https://portal.azure.com), click on the "More Services" option at the bottom of the services list on the left.  In the search box, type "storage" and a few options should be displayed.

I chose the newer "Storage Accounts" option (not "classic").  To pin this to your services list, click the star to the right.




When the Storage Accounts page is displayed, click on the New button at the top.


The Create Storage Account page will be displayed.



To create a new storage account, give the storage account a unique name and choose General Purpose.  I found that with SQL Server 2014, the Blob storage account type does not work, but it may work with SQL Server 2016.

Choose your Replication type.  The more comprehensive replication types will cost more, but I don't currently know what the prices are for each.  Here is an article describing the storage types.  Geo Redundant Storage is recommended if you want to recover your files in case a single data center is destroyed or inaccessible.

Choose an existing resource group or create a new one, and then choose a location for your storage account.  Once you have specified all of the settings, click on Create, and you can check the Pin to dashboard box to make it easy to access your account from your Azure dashboard.

It will take a few seconds for the storage account to be created.


Once it is setup, it will show as Available.


Click on the storage account to view the configuration.  On the left side, click on the Containers item under "Blob Service", then click on the + Container button to create a new storage container.


Give the new container a name and choose Private access level.


Once the container is created, click on "Access keys" on the left menu for the storage account.


Copy these keys and store them in a safe place.

Next, using your storage account name and one of your keys, create a "credential" on your local SQL Server.

CREATE CREDENTIAL azuresqlbackup                     -- name referenced by the BACKUP statement
WITH IDENTITY = 'mygpbackups'                        -- your Azure storage account name
, SECRET = 'yourazurestorageaccountkeyhere=='        -- one of the storage account access keys

You can then use a simple backup script to perform the backup, referencing the SQL credential that you created.

This script uses the COPY_ONLY option so that it does not disrupt the backup chain of local database backup jobs, and it also uses the COMPRESSION option, which dramatically reduces the backup file size and significantly improves the backup performance.

BACKUP DATABASE [DYNAMICS]
TO URL = 'https://mygpbackups.blob.core.windows.net/gpsqlbackups/2017-09-28 DYNAMICS.bak'
/* The URL is the Blob service endpoint, followed by the container name and the backup file name */
WITH CREDENTIAL = 'azuresqlbackup', /* the credential created in the previous step */
COPY_ONLY, COMPRESSION;
GO

BACKUP DATABASE [TWO]
TO URL = 'https://mygpbackups.blob.core.windows.net/gpsqlbackups/2017-09-28 TWO.bak'
WITH CREDENTIAL = 'azuresqlbackup', COPY_ONLY, COMPRESSION;
GO

Try running one of these backup scripts and if all goes well, SQL will send the bak file up to Azure.
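
If you are on SQL Server 2016 or later, here is a rough sketch of the newer Shared Access Signature (SAS) based syntax mentioned in the note above.  The storage account and container names reuse the examples above, and the SAS token is just a placeholder--you would generate a real one for the container in the Azure portal.

-- SQL 2016+: the credential is named after the container URL and uses a SAS token
CREATE CREDENTIAL [https://mygpbackups.blob.core.windows.net/gpsqlbackups]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE'
, SECRET = 'your-sas-token-here';   -- SAS token for the container, without the leading '?'
GO

-- With a SAS credential, the BACKUP statement omits the WITH CREDENTIAL clause
BACKUP DATABASE [DYNAMICS]
TO URL = 'https://mygpbackups.blob.core.windows.net/gpsqlbackups/2017-09-28 DYNAMICS.bak'
WITH COPY_ONLY, COMPRESSION;
GO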

I tested the backup on my GP 2016 virtual machine over my 100 mbit internet connection, and the performance was remarkable.

A 250MB DYNAMICS database was backed up in 13.6 seconds, and a 1.3GB TWO database was backed up in 96 seconds.


But that was without compression!  If I enable compression, the size of the backup file is greatly reduced, and the backup time drops dramatically.  From 18 seconds to 4.5 seconds for DYNAMICS, and from 96 seconds to 14 seconds for TWO!!!


My internet upload speed was maxed out at 120 megabits.


Based on my tests, I think using Azure Storage for SQL Server backups is a great option.  It doesn't have to be your primary daily backup repository, but I would definitely recommend that customers send a few weekly backups to Azure for safe keeping.



You can also find him on Twitter, YouTube, and Google+














Why I don't accept foreign checks (aka North American banking is a mess)

By Steve Endow

Several years ago, I received a paper check, in the mail, from a Dynamics partner in Canada.  The partner was paying my US dollar invoice, and thought they were doing me a favor by drafting the check from their US dollar bank account at their Canadian bank.

Send a check in US dollars to pay a USD invoice--makes sense, right?

Nosiree. 

I attempted to deposit the check at my local Bank of America branch using the ATM.  The ATM would not accept the check.  So I went inside the bank, stood in line, and then told the teller I wanted to deposit the check.  The teller looked at the check, and confusion ensued.

Eventually a manager came over and explained to me, with full confidence, and in no uncertain terms, that they were unable to accept the check.  He explained that the problem was not that the check was from a Canadian bank.  He said that the problem was that the Canadian check was issued in US Dollars.  He claimed that because the country of origin did not match the check currency, the branch could not accept the check.  That's the policy. (no it isn't)

So...how can I deposit the check?

The manager handed me a special envelope and a triplicate carbon copy form.  He said I needed to fill out the form and mail it with the check to a super special obscure department at Bank of America called "Foreign Clean Collections"--whatever that means.  Once the check is received by that department, it will review the check and coordinate with the foreign bank to get the funds transferred.  This process will take 6-8 WEEKS. 

You're kidding me, right?  Nope.

So, being curious about this banking train wreck, I gave it a try.  I filled out the form and mailed the USD $1,000 check off to the super special department.

A few weeks later, a deposit shows up in my account for $800.  Yup, $200 less than the check.  In addition to having to wait several weeks for the deposit, I was charged $200 in bank fees!

After that nightmare, I stopped accepting any foreign checks.  I put a big red note on my invoice that says that I only accept credit cards and wire transfers from non-US customers. And guess what: That process has been working just fine for years.

This week, a Canadian partner didn't read my invoice, and didn't read my email with the invoice, and they mailed me a paper check.  The check is from their Canadian bank, issued in US Dollars.  Great.

So I contacted a colleague who regularly receives Canadian checks, and she said that she routinely deposits Canadian checks issued in USD at her local BofA branch without any issues.  Huh.

But having paid my $200 entrance fee to the Bank of America Foreign Clean Collections Club, I wasn't about to just deposit this new check, wait several weeks, and see how much I get charged.

So I did the obvious thing:  I called my local Bank of America branch.

First customer service rep:  "Sorry, I don't deal with those things. Let me transfer you to our back office."  Apparently the back office doesn't have voicemail and is out to lunch at 9am, as the phone rang for 3 minutes with no answer.  I tried calling the branch back, but this time nobody answered and I got a voice response system.  So the local bank branches are useless when inquiring about these things.

So I then called the main BofA customer service 800 number.  I spoke with someone who tried very hard to help, but she was unable to find any information and her computer and phone were unable to contact the department who might be able to help.  So she gave me the phone number to the Bank of America Foreign Exchange Call Center.

I then directly called the illustrious Foreign Exchange Call Center and spoke with someone who, for the first time, sounded like he understood the mysterious process of depositing foreign checks with Bank of America.

"Can I deposit this Canadian check drafted in US Dollars at my local California branch?", I asked

"Every check is reviewed on a case by case basis.", he replied

What?  What does that even mean?

"Every check is reviewed on a case by case basis.", he replied

So you have no consistent policy about depositing foreign checks?

"Yes, we have a very consistent policy that I just explained to you.  Every check is reviewed on a case by case basis.", he replied


After speaking with him for several minutes and apparently annoying him, here is my understanding of the official Bank of America policy / procedure for foreign checks.

1. Acceptance of a foreign check is completely up to the discretion of the BofA branch, and the inconsistent and incorrect training that a teller or branch manager may have received.  The branch can simply say they don't accept foreign checks. Or they conjure up an excuse as to why they can't accept the check, like "the country of origin does not match the check currency".

2. If the branch is willing to try to accept the check, they can scan the check in their "system".  This "system" then determines if Bank of America is willing to accept the check at that branch.  Apparently this involves super secret algorithms about my "relationship" with the bank, the physical check, the bank that issued the check, the country of origin, the currency, the amount, etc. 

3. If the "system" determines that the branch can accept the specific check, apparently the check will be deposited in a fairly normal manner.

4. If the "system" determines that the branch cannot accept the check, then the magical process with the Foreign Clean Collections department kicks in, and you get the multi-part form, special envelope, a 6-8 WEEK processing time, and hundreds of dollars in fees that you will not be able to determine in advance. 

5. The representative claimed that Bank of America only charges a flat $40 for the Foreign Clean Collections process, but that the issuing bank can charge their own fees for having to process the foreign check.  In my case, I was charged around USD $150 by the issuing Canadian bank just for the honor of cashing their USD check.  There is realistically no way for you to know how much the foreign bank will charge in advance.

6. I asked the representative how I was supposed to accept payments given the uncertainty and fees involved in this process.  He told me that they recommend wire transfers for foreign payments, and basically told me not to accept foreign checks.

What a shocking conclusion.

Naturally, I have received several responses from people saying that they accept foreign checks all the time at their bank and never have an issue.  Good for you, I say, enjoy the 1900s!  The Pony Express loves you!

I rarely receive such checks, don't want to have to drive to the bank to deposit them, and don't want to deal with clueless bank employees and the nightmare game-of-chance process outlined above.

Checks are a vestigial organ of banking and are a testament to the absurdly anachronistic North American banking system.  Talk to someone from any country with a modern banking system and ask them how many checks they issue.  "Checks?  What?" will be the response.  People from Singapore and Australia literally laugh in disbelief when I mention that the US still uses paper checks.

Wire transfers have been well established since the late 1800s and now provide same-day international funds transfers, usually for a reasonable fixed fee.  Credit cards are a de facto payment method for a massive volume of transactions in many countries, they offer benefits like fraud protection and points, and the merchant pays the fees for those transactions--which I am happy to do.

And services like the excellent TransferWise provide very low cost EFT funds transfers to dozens of countries with an excellent exchange rate. 

The only explanation I have for why North American consumers and businesses cling to checks is that our backwards banking system does not (yet) charge fees to shuffle around millions of pieces of paper with ink on them, pay the postage to mail them, scan those papers into digital images, and then perform an electronic funds transfer behind the scenes.  But the banks do charge a fee if customers initiate a payment electronically through EFT / ACH or a wire transfer, where no paper is involved at all.  It's crazy.

So, after wasting a few more hours researching this topic, I now have a clear decree, straight from the heart of Bank of America, and will continue to accept only credit card and wire transfer payments from non-US customers.  If it's good enough for the rest of the world, it's good enough for me.



You can also find him on Twitter, YouTube, and Google+


Beware of UTC time zone data when importing data into Dynamics GP!

By Steve Endow

Prior to this year, I rarely had to deal with time zones when developing integrations for Dynamics GP.

The customer was typically using GP in a US time zone, the SQL Server was on premise in that time zone, and all of their data usually related to that same time zone.  Nice and simple.

Dynamics GP then introduced the DEX_ROW_TS field to several tables, and I would regularly forget that field used a UTC timestamp.  That was relatively minor and easy to work around.

But with the increasing popularity of Software As A Service (SaaS) platforms, I'm seeing more and more data that includes UTC timestamps.  I didn't think too much about this until today, when I found an issue with how a SaaS platform provided transaction dates in their export files.

Here is a sample value from a file that contains AP invoices:

    2017-09-05T14:26:05Z

This is a typical date time value, provided in what I generically call "Zulu time" format.  Apparently this format is defined in ISO 8601.

The format includes date and time, separated by the letter T, with a Z at the end, indicating that the time is based on the UTC time zone.

So why do we care?

Until today, I didn't think much of it, as my C# .NET code converts the full date time string to a DateTime value based on the local time zone, something like this:

string docDate = header["invoice-date"].ToString().Trim();
DateTime invoiceDate;
success = DateTime.TryParse(docDate, out invoiceDate);
if (!success)
{
    Log.Write("Failed to parse date for invoice " + docNumber + ": " + docDate, true);
}

This seemed to work fine.

But after a few weeks of using this integration, the customer noticed that a few invoices appeared to have the incorrect date.  So an 8/1/2017 invoice would be dated 7/31/2017.  Weird.

Looking at the data this morning, I noticed this in the SaaS data file for the Invoice Date field:

2017-08-25T06:00:00Z
2017-08-21T06:00:00Z
2017-08-23T06:00:00Z


Do you see the problem?

The SaaS vendor is taking the invoice date that the user in Colorado enters, and is simply appending "T06:00:00Z" to the end of all of the invoice dates.

Why is that a problem?

Well, when a user in Colorado enters an invoice dated 8/25/2017, they want the invoice date to be 8/25/2017.  But when the SaaS vendor appends an arbitrary timestamp of 6am UTC, my GP integration dutifully converts that UTC value to the server's local time, which can shift the invoice date back to 8/24/2017.

For invoices dated 8/25, that may not matter too much, but if the invoice is dated 9/1/2017, the date will get converted to 8/31/2017 and post to the wrong fiscal period.

To make things even more fun, I found that the SaaS vendor is also storing other dates in local time.

2017-09-05T08:24:36-07:00
2017-09-05T08:26:22-07:00
2017-09-05T08:28:13-07:00


So I have to be careful about which dates I convert from UTC to local time, which ones I truncate to just the date, and which ones are already in local time.  And I may contact the vendor to have them fix the issue with the invoice dates--there is no good reason why they should be appending "T06:00:00Z" to dates.
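
As a rough sketch of one way to handle the date-only fields (illustrative code, not the actual integration), the value can be parsed without shifting it to local time and then truncated to just the date, so the date the user entered is preserved:

using System;
using System.Globalization;

class InvoiceDateSketch
{
    static void Main()
    {
        string docDate = "2017-09-01T06:00:00Z";   // invoice date as provided by the SaaS vendor

        // Default parsing converts the value to the server's local time zone,
        // which can push the invoice date back a day (e.g. to 8/31/2017)
        DateTime localDate;
        DateTime.TryParse(docDate, out localDate);
        Console.WriteLine("Converted to local: " + localDate.ToShortDateString());

        // For date-only fields, keep the value in UTC and truncate the time,
        // so the date the user entered (9/1/2017) is preserved
        DateTime utcDate;
        DateTime.TryParse(docDate, CultureInfo.InvariantCulture,
            DateTimeStyles.AdjustToUniversal, out utcDate);
        DateTime invoiceDate = utcDate.Date;
        Console.WriteLine("UTC date only: " + invoiceDate.ToShortDateString());
    }
}

For the timestamps that include an explicit offset, like the -07:00 values above, the default conversion to local time is usually the desired behavior.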

Expect to see a lot more of this date format and related date issues as more customers adopt cloud-based solutions and services. 



You can also find him on Twitter, YouTube, and Google+




Free Precipio SFTP file transfer and data export tool - New Version 1.41 released

By Steve Endow

I have released a new version of my free SFTP file transfer and data export tool for Dynamics GP.

The new version 1.41 can be downloaded from my web site:

           http://precipioservices.com/sftp/


Version 1.41 includes the following enhancements:


  • Add support for optional SQLTimeout setting in config file to increase SQL command timeout
  • Set default SQLTimeout to 60 seconds if setting is not present in config file
  • Increase SFTP Connection Timeout from 5 seconds to 30 seconds, and Idle Timeout from 10 seconds to 30 seconds



The SQL Timeout setting allows for longer running queries, or queries that result in larger export files. 

The SFTP Connection Timeout was increased to accommodate some SFTP servers that might not complete the connection process in 5 seconds.


If you use the SFTP application, please let me know! I'd love to hear how you are using it and if it is working well for you.



You can also find him on Twitter, YouTube, and Google+









Building a Dynamics GP test environment on a B-series Azure Virtual Machine: Not so fast!

By Steve Endow

With the recent release of Dynamics GP 2018, I wanted to setup a new virtual machine that I could use for testing and development. 

I currently run my own Hyper-V server, which hosts 20 different virtual machines and has been very low cost and extremely fast.  I would be happy to outsource my VMs to the "cloud", but having looked into the cost several times over the last few years, it just isn't economical for me.  I previously estimated it would cost me over $300 a month to host just a few VMs.  That cost, on top of having to severely limit the number of VMs I can run, just didn't make sense for hosting my internal development VMs.

But recently fellow MVP Beat Bucher told me about a new Azure VM that was lower cost:  the B-Series "burstable" VMs.

https://azure.microsoft.com/en-us/blog/introducing-b-series-our-new-burstable-vm-size/

Beat explained that he was able to run two of the B4ms machines continuously for a cost of roughly $150 per month.  I was intrigued. 

After reviewing the different sizes, I setup a new B2ms virtual machine on Azure, running Windows Server.  The provisioning process was very simple, easy, and fast, and I had a VM a few minutes later.

I then downloaded and installed SQL Server and SQL Management Studio.  There were a few subtle hints that something wasn't quite right, but at the time the machine seemed great.

I then downloaded the 1.6 GB Dynamics GP 2018 DVD as a zip file.  As with the SQL Server download, the Chrome browser didn't show the download status for the GP 2018 zip file.  When I opened Windows File Explorer, nothing showed up in the download directory during the download or after the download appeared to complete.  It took quite a while for Windows File Explorer to show the downloaded file.

I noticed Windows File Explorer seemed unresponsive as well.  It just didn't feel right, but I hadn't yet pieced together the clues.

I then tried to unzip the GP 2018 file.  That's when it was clear something was wrong.


This status window appeared, showing that it would take over 30 minutes to extract the 1.6 GB zip file.  What??  1.36MB/s?

I then did dozens of other tests, simply copying large (1GB+) files on the C: drive and between the C: and D: temporary drive.  The performance was abysmal. 


After several tests, I noticed that on average, the file copies were clearly being throttled around 21-22MB/s.


What in the world was going on?

The B-Series VMs are supposed to have "Premium SSD" storage, and 21MB/s is definitely not SSD performance.

I submitted an Azure support case and after several days, received a response.  The support rep admitted that because the B-Series VMs were relatively new, he didn't have much experience with them and would need me to do some tests to narrow down the cause.  No problem.

He first had me "redeploy" the Azure VM, which apparently pushes the VM to a new "node" or physical host machine.  I completed that process and tested again, but got the same results: file copies were still painfully slow.

He then had me install the Performance Insights plugin on the VM, which apparently runs some automated performance tests and automatically submits the results to the support case (a very cool feature).  I completed that process and a few days later, he emailed me with an explanation for the slow disk performance I was seeing.

This is the critical information that I overlooked when selecting the B-Series VM:


Notice that the B2ms size has a maximum disk throughput of 22.5 MB/s.  That is the maximum for the entire VM, regardless of how fast the attached disks are.

The B4ms offers 35MB/s and the B8ms tops out at 50MB/s.  50 sounds a lot better than 22.5, but even 50MB/s is horrifically slow compared to any competent modern storage.

Even if you add an additional high performance Premium SSD, such as a 1023GB drive with 5,000 IOPS and 200MB/s throughput (which is VERY expensive), if it is attached to a B2ms VM, you will still be limited to 22.5 MB/s. 



For comparison, my local Hyper-V server can copy files at 100MB/s from my NAS, and the limiting factor is the gigabit network connection between the NAS and the server, not my NAS or the SSDs in my server.

Local file copies on the SSDs on my Hyper-V server can be as high as 1GB/s!! It's so fast that I had a very hard time getting a screen shot while copying the 1.6GB Dynamics GP 2018 zip file.


If you are used to even half-decent disk performance on a server, can you live with 22.5 or 35 MB/s on an Azure B-Series VM?

And am I willing to spend an extra hour or two setting up an Azure B-Series VM, due to its brutally slow disk IO, for a Dynamics GP 2018 test environment?  Am I confident that once I set it up and don't have to do many large file copies, that the disk performance will be sufficient for my needs?

Can SQL Server actually run well enough on a disk throttled at 22.5MB/s?  Now that I see the disk specs, I am pretty sure that the B-Series was never intended to ever run SQL Server.

And I'm not willing to waste my time to find out.  Those disk speeds are so slow that I am not confident that the B-Series VM will meet my needs even for a test + development server.  Even if I used the B4ms, that's roughly $75 a month for a potentially painfully slow VM. 

So, I have ruled out the B-Series Azure VMs for now, and would have to look at the "standard" VMs, which would likely still cost $150-$300 per month for 1-2 non-production VMs.

Since I have a very fast Hyper-V server in my office that can easily host 20 VMs with a marginal cost of $0 per month per VM, it seems that I will be sticking with an on premises server for at least a few more years.



You can also find him on Twitter, YouTube, and Google+





Accepting help from experts and offering help as an expert

By Steve Endow

I've recently had two situations where someone asked for help with Dynamics GP, and when I provided guidance, the requester indicated that my suggestions were not relevant.  Without considering my suggestions or trying them, the requester immediately ruled them out.



They were simple suggestions, such as "please try making this change and perform the process again to see if that resolves the error", or "have you traced your source data to verify that it isn't the cause of the incorrect transaction that was imported?".

"That can't be the cause." was one response.

"My custom stored procedure that imports data into GP verifies everything, so I know it worked properly." was another response.

Another common response I receive when troubleshooting issues is, "We've already checked that and it's not the cause of the problem."



I don't consider myself an "expert" at anything, but there are some topics where I've done enough work to have a certain level of knowledge, intuition, and skill that I'm generally able to narrow down the causes of problems, and I typically know some good places to start looking.  I have had enough success solving problems in certain areas that my approach generally seems to work.

When someone asks for help and then immediately dismisses my initial recommendations without even trying them, how can I help them?  Maybe they don't know who I am or what experience I have, and they're skeptical of my suggestions.  What can I do then?

Do I gently explain that I've worked with over 400 customers in this specific domain, and that my anecdotal statistics would not support the assertion that their integration is infallible or that Dynamics GP is at fault?  Is it my job to convince them that I tend to have a fairly good grasp of the subject matter and that they should reconsider my suggestion?  Is there any point in arguing with someone who has asked for help, but isn't accepting my help?

"Experts" don't know everything and can't always immediately pinpoint causes or solutions.  But if they ask questions, ask for more information, or ask you to test something, isn't it in your best interest to at least try working with them?  If you're not willing to work with an expert, what are your alternatives?

Instead of immediately ruling out suggestions, welcome them as opportunities to learn. Collect new data. Make new assessments. Understand what they are thinking.

Be inquisitive and curious and humble. Don't be defensive or righteous. This applies to the person asking for help, as well as the expert being asked.


Steve Endow is a Microsoft MVP in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Twitter, YouTube, and Google+




