
The value of proactive integration logging and error notifications

By Steve Endow

Logging is often an afterthought with Dynamics GP integrations.  Custom integrations often do not have any logging, and while integration tools may log by default, they often log cryptic information, or log lots of events that are not meaningful to the user.

A few years ago I developed a custom RESTful JSON web service API for Dynamics GP that would allow a customer to submit data from their PHP based operational system to Dynamics GP.  They needed to save new customer information and credit card or ACH payment information to Dynamics GP, and they wanted to submit the data in real time.

I originally developed the integration with my standard logging to daily text log files.  While application logging purists (yes, they do exist) would probably criticize this method, my 12+ years of experience doing this has made it very clear that the simple text log file is by far the most appropriate solution for the Dynamics GP customers that I work with.  Let's just say that Splunk is not an option.

The GP integration worked great, and the logging was dutifully working in the background, unnoticed.  After a few months, the customer observed some performance issues with the GP integration, so I enhanced the logging to include more detailed information that would allow us to quickly identify performance issues and troubleshoot them.  In addition to enhancing the detail that was logged, I added some proactive measures to the logging.  I started tracking any delays in the GP integration, which were logged, and I added email notification in case any errors or delays were encountered.

The logging has worked very well, and has allowed us to identify several very complex issues that would have been impossible to diagnose without detailed, millisecond level logging.

Today there was a great demonstration of the value of the integration logging, and more importantly, the proactive nature of the error notification process.

This is an email that was sent to the finance department at 9:18am Central time.  It notifies the users that an error has occurred, the nature of the error, and recent lines from the log to help me quickly troubleshoot the issue.  The user won't be able to understand all of the details, but they will know within seconds that there was a problem, and they will see the customer record that had the problem.


Subject: GP Web Service - Registration Error - PROD

The Dynamics GP Web Service encountered the following errors on 11/21/2016 9:18:22 AM: 

SubmitRegistration for customer Acme Supply Co exceeded the timeout threshold: 15.61 seconds

Here are the most recent lines from the log file:

11/21/2016 09:18:06.505: 10.0.0.66 SubmitRegistration called for customer Acme Supply Co (client, Credit Card)
11/21/2016 09:18:06.505:(0.00) SubmitRegistration - ValidRegistrationHMAC returned True
11/21/2016 09:18:06.505:(0.00) RegistrationRequest started for customer Acme Supply Co
11/21/2016 09:18:06.739:(0.22) ImportCustomer returned True
11/21/2016 09:18:06.786:(0.28) InsertCustomerEmailOptions returned True
11/21/2016 09:18:22.43:       (15.53) Non-Agency ImportAuthNet returned True
11/21/2016 09:18:22.121:(15.60) Non-Agency ImportAzox returned True
11/21/2016 09:18:22.121:(15.60) RegistrationRequest completed
11/21/2016 09:18:22.121:(15.61) SubmitRegistration - RegistrationRequest returned True
11/21/2016 09:18:22.121: WARNING: SubmitRegistration elapsed time: 15.61


Just by glancing at the email, I was able to tell the customer that the delay was due to Authorize.net.  The log shows that a single call to Authorize.net took over 15 seconds to complete.  This pushed the total processing time over the 10 second threshold, which triggers a timeout error notification.

Subsequent timeout errors that occurred throughout the morning also showed delays with Authorize.net.  We checked the Authorize.net status web page, but there were no issues listed.  We informed the client of the cause of the issue, and let them know that we had the choice of waiting to see if the problems went away, or submitting a support case with Authorize.net.

The client chose to wait, and sure enough, at 10:35am Central time, Authorize.net posted a status update on Twitter about the issue.


That was followed by further status updates on the Authorize.net web site, with a resolution implemented by 11:18am Central time.


Because of the proactive logging and notification, the customer knew about an issue with one of the largest payment gateways within seconds, which was over an hour before Authorize.net notified customers.

We didn't have to panic, speculate, or waste time trying fixes that wouldn't resolve the issue (a sysadmin reflexively recommended rebooting servers).  The users knew of the issue immediately, and within minutes of receiving the diagnosis, they were able to adjust their workflow accordingly.

While in this case, we weren't able to directly resolve the issue with the external provider, the logging saved the entire team many hours of potentially wasted time.

So if you use integrations, particularly automated processes, meaningful logging and proactive notifications are essential to reducing the effort and costs associated with supporting those integrations.











Multiple Fixed Asset Calendars

Many of you may already be aware that GP now has the capability to handle different calendars for different fixed asset books.  For example, your corporate book could be based on your fiscal year while your tax books are based on a calendar year.  The calendars are managed in Fixed Assets under Setup, then Calendar.  The system comes with a Default calendar that is assigned to books automatically.  Until you run depreciation, you can change the calendar associated with a book (or set up new calendars).  Once you run depreciation, however, you will have to set up a new book if you want to change the assigned calendar.


With the multi-calendar functionality, dealing with short or long years due to a fiscal year change has become much simpler.  In the calendar setup window, you now have options for these situations:




If the selected year needs to be either short or long, simply mark the option for that year (make sure you have the correct year selected).  Then specify how much of a full year's depreciation you want to take in that year (100% would be the norm for a normal 12 period year).  So, for example, if you extended the year by 6 months, you might enter 150%.  Or if you have a short year of 6 months, you would enter 50% of the full year depreciation.  Easy Peasy Lemon Squeezy, right?


I also highlighted the options to build your future years based on the fiscal period setup.  You will want to do this so that they are synced to your new fiscal calendar including the prior year setup, the short/long year, and the future year setup (just make sure you have a future year setup with the normal fiscal year).


Assuming these changes do not alter the depreciation to be taken in a period that has already been processed in Fixed Assets, there is no need to run a reset on the assets. 


Christina Phillips is a Microsoft Certified Trainer and Dynamics GP Certified Professional. She is a director with BKD Technologies, providing training, support, and project management services to new and existing Microsoft Dynamics customers. This blog represents her views only, not those of her employer.

Do you EFT but still print remittances?

Sometimes when it rains, it pours.  It seems like requests come in waves, and in the last two weeks I have had 4 separate clients ask about or implement emailing remittances.  It seems like such an obvious thing, because if you are avoiding printing checks, why wouldn't you also want to avoid printing remittances?

The good news is that it is super simple to set up.  Assuming you want the emails to be routed directly through Exchange (and not through a local email client), you first need an account to be used for sending the emails.  Second, your Exchange server needs to have the Autodiscover option enabled.  Then it is really as simple as the following 4 steps...


1. Admin-Setup-System-System Preferences, select Exchange for the email option
2. Admin-Setup-Company-Email Message Setup, create a message ID and message for the emails
3. Admin-Setup-Company-Email Settings, set options for emails (including document type) and then click Purchasing Series to enable and specify the email message ID for remittances
4. Cards-Purchasing-Vendor, enter email addresses using the Internet Addresses (blue/green globe) for the remit to address (use the To, CC, and BCC fields as appropriate) and then enable the email remittance under the Email Settings button for the vendor


Once you have these steps completed, it is as simple as choosing to email remittance forms when you are in the Process Remittance window (Transactions-Purchasing-Process Remittance).  Keep in mind, I definitely recommend doing this first with a single vendor using your own email address, as you may want to tweak the format and/or the email message.


Happy emailing!

Christina Phillips is a Microsoft Certified Trainer and Dynamics GP Certified Professional. She is a director with BKD Technologies, providing training, support, and project management services to new and existing Microsoft Dynamics customers. This blog represents her views only, not those of her employer.

Rare eConnect taPMTransactionInsert error 311 and 312: Tax Schedule ID does not exist

By Steve Endow

Here are two obscure eConnect errors that you should never encounter--unlike my customer, who did encounter them.


Error Number = 311 Stored Procedure= taPMTransactionInsert Error Description = Misc Tax Schedule ID (MSCSCHID) does not exist in the Sales/Purchse Tax Schedule Master Table – TX00102
MSCSCHID = Note: This parameter was not passed in, no value for the parameter will be returned.


Error Number = 312 Stored Procedure= taPMTransactionInsert Error Description = Freight Tax Schedule ID (FRTSCHID) does not exist in the Sales/Purchases Tax Schedule Master Table – TX00102
FRTSCHID = Note: This parameter was not passed in, no value for the parameter will be returned.


Notice that the error says that the tax schedule ID does not exist, but then says that no value was passed in for the tax schedule ID.

So, if you are sending in a blank tax schedule ID value to eConnect, how can it be invalid, and thus cause this error?

As with many eConnect errors like this, the error is not caused by what you send to eConnect.  It's caused by some value or configuration option buried deep in Dynamics GP, that is impossible to figure out based on the eConnect error alone.

Here is the validation script that triggers the 311 error:

IF ( @I_vMSCSCHID <> '' )
    BEGIN
        IF NOT EXISTS ( SELECT  1
                        FROM    TX00102 (NOLOCK)
                        WHERE   TAXSCHID = @I_vMSCSCHID )
            BEGIN
                SELECT  @O_iErrorState = 311;
                EXEC @iStatus = taUpdateString @O_iErrorState,
                    @oErrString, @oErrString OUTPUT,
                    @O_oErrorState OUTPUT;
            END;
    END;


This would seem to make sense--if a Misc Tax Schedule ID value was passed in, verify that it exists in the TX00102 tax table.

But...what if you aren't passing in a Misc Tax Schedule ID--which our error message above indicates?

Well, we then need to dig a little deeper to find out where a value is being set for @I_vMSCSCHID.  And we find this:

SELECT  @I_vPCHSCHID = CASE WHEN ( @I_vPCHSCHID = '' )
                            THEN PCHSCHID
                            ELSE @I_vPCHSCHID
                        END,
        @I_vMSCSCHID = CASE WHEN ( @I_vMSCSCHID = '' )
                            THEN MSCSCHID
                            ELSE @I_vMSCSCHID
                        END,
        @I_vFRTSCHID = CASE WHEN ( @I_vFRTSCHID = '' )
                            THEN FRTSCHID
                            ELSE @I_vFRTSCHID
                        END
FROM    PM40100 (NOLOCK)
WHERE   UNIQKEY = '1';


So what does this tell us?  If no tax schedules are passed into taPMTransactionInsert, eConnect tries to get default Tax Schedule IDs from the Payables Setup table, PM40100.  Once it gets those Tax Schedule IDs, it validates them.

So...how could that cause the error we're seeing?

Figured it out yet?

The only way the default Tax Schedule IDs in PM40100 could cause the error would be if those default Tax Schedule IDs are INVALID!

Wait a minute.  How could the default tax schedule IDs in the Payables Setup Options window be invalid, you ask?  The Payables Setup Options window validates those at the field level--the window won't let you enter an invalid value or save an invalid value.

So, that leaves either a direct SQL update to set an invalid value in PM40100, or perhaps more likely, someone ran a SQL delete to remove records from TX00102.  My guess is that someone figured they didn't need a bunch of pesky tax schedules, or wanted to change some tax schedule IDs, and they didn't realize that the PM40100 was also storing the tax schedule IDs.

I've asked the consultant to run this query to check the tax schedule IDs set up in PM40100.

SELECT  pm.PCHSCHID, 
(SELECT COUNT(*) FROM TX00102 WHERE TAXSCHID = pm.PCHSCHID) AS PurchIDExists, 
pm.MSCSCHID, 
(SELECT COUNT(*) FROM TX00102 WHERE TAXSCHID = pm.MSCSCHID) AS MiscIDExists, 
pm.FRTSCHID,
(SELECT COUNT(*) FROM TX00102 WHERE TAXSCHID = pm.FRTSCHID) AS FrtIDExists 
FROM PM40100 pm 



If the tax schedules have values, but the "IDExists" fields have a value of 0, then that means there are no matching records in TX00102, and that the values are invalid.
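
If the query does come back with orphaned values, one possible cleanup is sketched below.  This is only a hedged example, not a vetted fix: 'VALIDSCHED' is a hypothetical placeholder for a tax schedule ID that actually exists in TX00102, and the safest route is still to correct the values through the Payables Setup Options window so GP itself validates them.  Back up the database and test in a non-production company first.

-- Minimal cleanup sketch only.  'VALIDSCHED' must be replaced with a real tax schedule ID
-- that exists in TX00102.  Blank defaults are left alone; only orphaned values are replaced.
UPDATE PM40100
SET    PCHSCHID = CASE WHEN PCHSCHID <> '' AND NOT EXISTS (SELECT 1 FROM TX00102 WHERE TAXSCHID = PM40100.PCHSCHID)
                       THEN 'VALIDSCHED' ELSE PCHSCHID END,
       MSCSCHID = CASE WHEN MSCSCHID <> '' AND NOT EXISTS (SELECT 1 FROM TX00102 WHERE TAXSCHID = PM40100.MSCSCHID)
                       THEN 'VALIDSCHED' ELSE MSCSCHID END,
       FRTSCHID = CASE WHEN FRTSCHID <> '' AND NOT EXISTS (SELECT 1 FROM TX00102 WHERE TAXSCHID = PM40100.FRTSCHID)
                       THEN 'VALIDSCHED' ELSE FRTSCHID END
WHERE  UNIQKEY = '1';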



And that is the solution to your mystery eConnect error of the week!






Things to Consider- Chart of Accounts

During the course of a new implementation of Dynamics GP, we usually have a discussion surrounding the chart of accounts.  Do you want to change it? If so, how?  How well does it work for you today?  And clients sometimes vary in their willingness to explore changing it.  Some are open to discussion, to see how they might tweak it to better support their needs, while others are satisfied with what they use today.  From time to time, we also find ourselves discussing the chart of accounts structure with clients who have been on Dynamics GP for a number of years or even decades.  In those cases, the company may have grown and the reporting needs have also changed.


I thought it might be worthwhile to share some of my own discussion points when exploring the chart of accounts structure with both new and longtime Dynamics GP users.  So where do I start? I always start with the desired end result...Reporting! So let's start there, and then toss in all my other typical considerations...


  • What are the current and desired reporting needs?  How are reports divided/segmented (departmental, divisional, etc)?  Are the lowest levels for reporting represented in the chart of accounts today?  How about summary levels?  Do the summary levels change in terms of organization over time (so maybe they shouldn't be in the chart of accounts structure)? Is there reporting and/or other tracking in Excel that should be accommodated by the chart of accounts structure so that the reporting can be automated?
  • What about budgeting?  What level does budgeting occur at?  Is that represented? 
  • What about other analytics? Are the components available in the chart of accounts?  Are there statistical variables?  Are they in Dynamics GP as unit accounts?
  • How does payroll flow to the general ledger, does it align to the chart of accounts (e.g., departments, positions, codes, do they match up)?  Is there an expectation of payroll reporting from the general ledger in terms of benefit costs, employee costs, etc?  Are those levels represented in the chart of accounts?
  • Are your segments consistent?  Does a value in department mean the same thing across all accounts?  Or do you need to look at multiple segments to determine the meaning (e.g., department 10 with location 20 means something different than department 10 with location 40)?  Consistency is a goal whenever possible to facilitate reporting.
  • How about your main accounts?  Have you reviewed a distinct list of them?  Are they logical, in order, and do they follow the norm (e.g., expenses in the 6000s)?  Is there room to add main accounts?  Are there duplicated/inconsistent main accounts?
  • Do you do allocations?  If so, how and by what factors?  Can we use fixed or variable allocations to facilitate this in GP?  Do we have the needed components in the chart of accounts to determine what to allocate from and to?  Do you want to offset the allocation in separate accounts to see the in/out of the allocation?


Anything I missed?  Thoughts, comments?  Please share and I will update the list!


Christina Phillips is a Microsoft Certified Trainer and Dynamics GP Certified Professional. She is a director with BKD Technologies, providing training, support, and project management services to new and existing Microsoft Dynamics customers. This blog represents her views only, not those of her employer.

Approaching Acquisitions with Dynamics GP

Sometimes in this line of work, you get so used to doing things one way that it takes a bit of a jolt to remind you that there are other ways to approach things.  Sometimes that jolt comes from a coworker's comment, or a client asking a question.  I like those moments, because they encourage innovative, creative thinking.  And innovative, creative thinking challenges me and, honestly, makes this job a whole lot more fun!

One example of this sort of situation is acquisitions.  Specifically, situations where a company on GP is acquired but plans to stay on GP through the transition.  Typically, this means the following...


  1. Identify closing date of acquisition
  2. Set up a new company database
  3. Transfer over acquired assets and balances as of the transition date

This approach works well when you are dealing with a single company, or maybe a couple.  It works because it's...

  1. Straightforward
  2. Relatively simple
  3. Clean (the client can keep the history of the former company in the old company while the new company starts fresh)
Where this process doesn't work so well is when we start talking about...

  1. Lots o' companies
  2. Lots o' data/modules/customizations/integrations
In these cases, the idea of setting up multiple brand new companies, copying data, and ensuring that customizations/integrations work can be a bit daunting in the midst of an acquisition.  This is doubly true if the customizations/integrations support critical day-to-day business operations.  Those of you who know me know that I don't believe we should avoid something just because it is "daunting."  But these "daunting" things do mean we have to approach the project with a higher level of due diligence in advance, as well as project management during the project, to mitigate risks.

So what about another option?  Can we avoid setting up all these new companies?  Yes, we can.  It just requires a bit more creative thinking.  As an alternative, we can approach it like this...

  1. Continue forward with same companies
  2. Backup companies at transition date to create historical companies
  3. Remove history as needed from live companies
  4. Enter manual closing entries as of the transition date (assuming fiscal year is not changing, and transition is not fiscal year end)
  5. Reset assets and any other balances as needed (this can be the tricky step, involving scripts to set the original cost basis equal to the net cost basis, etc., to move forward)

Now, the process above does require due diligence in advance as well to make sure all transition needs are identified and planned for.  But it can save effort and reduce risk in some cases.  So...a solution to consider.  What other creative/innovative approaches have you seen to handling acquisitions in Dynamics GP?  I'd love to hear from you!


Christina Phillips is a Microsoft Certified Trainer and Dynamics GP Certified Professional. She is a director with BKD Technologies, providing training, support, and project management services to new and existing Microsoft Dynamics customers. This blog represents her views only, not those of her employer.

Source code control is a process, and processes are prone to mistakes

By Steve Endow

I previously thought of source code control as just another piece of software--an application that you use to manage the versions of your code.  SourceSafe, SVN, Git, Team Foundation Server--there are many options for software or services that will take care of source code control / version control for you.  As long as you "use" one of those solutions, you're all set.

But today I learned a pretty significant lesson.  It is probably obvious for many people, but it was a bit of a wake up call for me.

"Source code control" is not Git or Team Foundation Server or VisualSVN.  Those are just tools that are just one piece of a larger process.  And regardless of which tool you use or how great that tool is, if you don't have a solid, resilient process surrounding that tool, you will likely experience breakdowns.

Last week I sent out a new version of a Dynamics GP customization.  The new version only had a few minor changes to enhance error handling--I added several try/catch blocks to try and track down an intermittent error that the user was seeing.

The customer tried the new version today, but they quickly noticed that a feature that was added over 3 months ago was gone. Uh oh.

While making the latest error handling changes, I noticed some oddities.  The last release was version 1.30, but the projects in Visual Studio were still set as version 1.21.  I checked the version 1.30 files that were released, and they were versioned properly, so something happened that caused the Visual Studio projects to revert to version 1.21.  I wouldn't have done that manually.

I then checked my Documentation.cs file that I maintain on all of my projects.  The last note I had was for version 1.21.  No notes on version 1.30.  That's not like me, as that's usually my first step when updating a project.

I then checked the Git branch of the project.  Visual Studio was using branch 1.31, but it was only a local branch and hadn't been published to BitBucket.  1.30 was published, but it didn't have any notes on version 1.30 in my local repository or on BitBucket.

I checked the Commit log on BitBucket, and that left me even more puzzled. I didn't seem to have any commits acknowledging the version 1.30 release.


I see check ins for v1.2 and v1.21, and the new v1.31 release, but nothing for v1.30.

Somehow I had produced a version 1.30, with correct version numbers in Visual Studio, which produced properly versioned DLLs, which got released to the customer, but I have the following problems:

1. I either didn't update my Documentation.cs file, or it somehow got reverted to a prior release, causing my changes to be wiped

2. Somehow my Visual Studio project version numbers got reverted from 1.30 to 1.21

3. I can't find any record in the code for the version 1.30 changes

4. Despite having v1.30 and v1.31 branches in Git, I didn't see any changes when comparing them to each other, or to v1.21.

5. I can't find any evidence of a version 1.30 release in BitBucket


The only evidence I have of a version 1.30 release is the separate release folder I maintain on my workstation, where I did document it in the release notes.


And I see that the DLLs were definitely version 1.30, so I'm not completely imagining things.


So somehow, I managed to make the following mistakes:

1. Potentially reverted my code to a prior release and lost some changes

2. Didn't clearly perform a v1.30 check in, or if I did, my commit comments did not indicate the version number like I usually (almost always) do

3. Created a v1.31 branch for an unknown reason that I didn't publish and didn't document.

4. Somehow made what is likely a series of several small mistakes that resulted in the versioning problem that I'm trying to fix today.


The most frustrating part is that it isn't obvious to me how such a roll back could have happened.

And all of this despite the fact that I'm using an excellent IDE (Visual Studio), an amazing version control system (Git), and a fantastic online source code management service (BitBucket).

My problems today have nothing to do with the tools I'm using.  They clearly stem from one or more breakdowns in my process.  And this was just me working on a small project.  Imagine the complexities, mistakes, and issues that come up when there are 15 people working on a complex development project.

So today I learned that I have a process issue.  Maybe I was tired, maybe I was distracted, but clearly I forgot to complete multiple steps in my process, or somehow managed to revert my code and wipe out the work that I did.

I now see that I need to invest in my process.  I need to automate, reduce the number of steps, reduce the number of manual mistakes I can make, and make it easier for me to use the great tools that I have.

I don't know how to do that yet, but I'm pretty sure that much smarter people than I have had this same issue and come up with some good solutions.







What Do You Know About Your Customizations?

When I first started consulting over 15 years ago, it seemed like we didn't come across that much customization.  Maybe it was because we didn't have a developer on staff, maybe it was the nature of the clients we served, or maybe it was just indicative of the time.  But over the past 15 years, I've seen a growth in customization and integration on even the smallest of projects.  I attribute this to a number of things, including the growing sophistication (and therefore expectations) of clients (even on the smaller end), the release of additional development tools that decrease the effort involved, and even a changing mindset that customization can help "unleash" the potential of your software.  Whatever the reason, it seems like customization at some level has become the norm.


Let me also add that when I say "customization" in this post, I am including integrations and interfaces between systems as well.


With clients we have implemented, and those we have picked up over time, the tendency  seems to be to "trust the professionals" with the customizations.  While I agree with this on one level, in terms of who should be doing the actual development-- I also would emphasize that every client/power user needs to understand their customizations on several levels.  This due diligence on the client/power user side can help ensure that the customization...


  • works in a practical sense for your everyday business
  • is built on technology that you understand at a high level
  • can grow with your business
  • is understood in a way that can be communicated throughout the user base (and for future users)
I often find that over time, the understanding of a customization can become lost in an organization.  Give it 5 years, and current users will bemoan that they don't understand...


  • why they have customization
  • what the customization does
  • how the customization can be adjusted for new needs
Meanwhile, the IT admins will bemoan a lack of understanding of how to support the customization effectively and/or perpetuate misunderstandings regarding the technology and capabilities.


None of this is necessarily anyone's fault, but it does emphasize the need for due diligence anytime you engage with a consultant or developer for a customization (and it is even a worthwhile endeavor to review your existing customizations).  What are the key parts of the due diligence I would recommend?  Well, you KNEW I was going to get to that!  So here you go...


  1. Every customization you have should have a specification. It doesn't have to be fancy, but it does need to be a document that contains an explanation of the functionality of the customization as well as the technology to be used to develop it.  Ideally, it should also contain contact information for the developer.  I'll be honest, I run into clients wanting to skip this step more often than consultants and developers. I suppose this has to do with not seeing the value of this step, or seeing it as a way for consultants to bill more.  But this step has the greatest impact on a client's ability to understand what they are paying for, and on minimizing miscommunication and missed expectations.  If you don't have specs for your existing customizations, ask for them to be written now (either internally or by the consultants/developers). On another side note, the actual process for using the customization should be documented somewhere as well. Sometimes this is added to the spec, sometimes it is added to other process documentation. But make sure it happens, so that the customization is included in internal training efforts.
  2. Understand at a high level what technology is being used to develop the customization.  Why do you need to know this?  Well, you need to understand what is involved in upgrading the customization for new versions.  How about adjusting or adding functionality?  Will it mean writing code, or something simpler?  How about source code?  Does the customization have source code, and who owns it (in most cases the developer/company, not the client, retains ownership)?  What does that mean if the developer stops working for the company?  Or if you change companies?  Will there be a detailed technical design document to be used if other developers need to be engaged?  And is the technology specialized (e.g., Dexterity) or more common (e.g., VB, .NET, etc.)?  All are important questions that impact the longevity and flexibility of the customization.
  3. Conduct full testing with a method for collecting feedback internally, so that you can ensure that the customization indeed meets expectations and enhances the user experience.  It is not uncommon for a customization to be developed per the specification but still need adjustments in practice to make it truly useful for the users.  When this happens, clients will sometimes "stall out" out of fear of additional costs, even though, in the long run, the additional costs incurred at this stage could save frustration as well as future replacement costs when the customization is abandoned.  Just make sure that, at this point in the project, the spec and process documentation are updated with the changes.
What else would you add to the due diligence for clients and customizations? Let me know!


Christina Phillips is a Microsoft Certified Trainer and Dynamics GP Certified Professional. She is a director with BKD Technologies, providing training, support, and project management services to new and existing Microsoft Dynamics customers. This blog represents her views only, not those of her employer.



Portable DIY Surface Pro Table Top Stand / Lectern: Computer Woodworking

By Steve Endow

Last week MVP Jen Kuntz posted a neat update on Twitter with some photos of a cool sliding door that she built.

Following her lead on the woodworking post, I thought I would write a post about a small woodworking project that I worked on today.  Computer related, no less!

I needed some type of table top stand for my Surface Pro 4.  I have a situation where I need to work on my Surface Pro while standing, but the space where I'll be working only has a small table.

I didn't want, or need, a typical boxy table-top lectern.  I wanted something simple, compact, portable and light, that I could quickly setup for use, and then easily fold up and put away.  Unlike a typical lectern with an angled top, I wanted a flat surface so that my Surface Pro and my mouse would not slide off. (If you've done presentations with a typical angled lectern, you know what I'm talking about.)

I fired up SketchUp and quickly came up with this simple design, which has a flat top and folding support legs.  I wanted to keep it as simple as possible so that I could quickly build it this afternoon with as little wood and as little effort as possible.

The folding legs would be attached to the back piece with hinges so that they could be moved into place to support the top, and the top would be on a hinge as well, allowing it to fold down.


After some initial testing, I realized I needed to add a folding stand in the back to prevent it from tipping backwards.


With the legs folded flat, the hinged top folds down flat, and the top has a convenient carry handle.  I figured this would make it very easy to setup, and then I could fold it up in 2 seconds and easily store it out of the way, taking up minimal room.


With my rough design in hand, I headed out to the wood pile. Um, I mean my garage.  If you are a woodworker, or know any woodworkers, you probably know that we hate to throw away perfectly good scraps of wood.  You never know when that small off cut will come in handy!

Fortunately, I had the perfect scraps for the project.  I had a scrap of maple plywood that was almost exactly the dimensions of the top, a nice piece of poplar for the center back, and I had just enough select pine scraps for the folding legs.

The select pine was slightly narrower than my SketchUp design, so I had to adjust my dimensions a bit on the fly, but it worked out just fine.


I cut the pieces to length on the miter saw, and things were looking good.  To save time, I didn't bother to taper the legs, like what is shown in the design.


To join the folding legs, I used my Festool Domino, but pocket hole screws would probably work fine as well.


The Domino is a bit tedious to setup, but the results are Extra Fancy.


With the legs glued and assembled, I clamped them up and then moved on to work on the top piece.


The scrap of plywood was so close to my design dimensions that I didn't even have to cut it--it was ready to go.  I just needed to cut the handle out.

I sketched out the area for the handle and I used a large forstner bit to start the handle hole.



At this point, most people would use a jigsaw to cut the piece between the two holes, but 1) I absolutely hate the jigsaw, and 2) I got a new compact router recently, so I figured I would take the path less traveled and cut out the handle with a spiral up cut bit.


So the router was an interesting choice.  The cut didn't turn out perfect, but it was convenient and good enough for this project.

I then got Extra Fancy and chamfered the edges of the handle--again, another excuse to use the router.


Then, every woodworker's least favorite task--sanding--to remove any rough edges.


More chamfering around the edges of the top. Because new router!


With all of the pieces done, I did a dry fit of sorts, just to make sure everything looked right.


Then a quick run to Home Depot to pick up some hinges.  If you want to get Extra Fancy, you could go with piano hinge for just a few dollars more, but I didn't want to spend time cutting the piano hinge, so I opted for the ugly utilitarian hinges.


And with all of the hinges in place, the stand worked perfectly.


And it folded up nice and flat.


It's very light weight and the handle makes it really easy to carry.


A quick test on a table confirmed that it worked great with my Surface Pro and mouse.


During my initial testing, I noticed that it could potentially tip backwards, so I grabbed another small scrap of plywood (perfect size!) and with the one remaining hinge, added the extra stand on the back to prevent it from tipping over.


To finish it off with a touch of Extra Fancy, I'm going to counter sink a few neodymium magnets into the top of the legs and bottom of the table so that the legs will pop into place and be held by the magnets.  I'll probably also add a magnet to the stand on the back to keep it folded flat when closed.

I hope you enjoyed this computer woodworking fusion project!







What Are Your Software Resolutions?

Some of my favorite clients, when I walk in their door every few months, ask "What's New Out There?" and "What Are People Doing?"  I will admit, I just love the continual growth mindset.  Although it does take time and energy (and money) to leverage your software to its fullest potential, I find that clients who take this on as part of software ownership are generally happier and more satisfied than those who tend to stagnate--never looking at new approaches or add-ons, or taking care to expand their use of new functionality as appropriate.


So along these lines, I thought I would put together my top 5 software resolutions.  Although written with Dynamics GP and CRM in mind, these really can apply to a myriad of software solutions and vendor relationships you may have.


  1. Stop expecting software to do more without you contributing more: Whether it is time, expertise, or money (in the form of consulting dollars or add-on software), your software package will only expand and do more for you if you are willing to contribute.  Some of my clients who do the best with this resolution have monthly GP user meetings (internally) to discuss issues and goals and also participate in GPUG and other groups to knowledge-share.  In organizations that don't regularly do this, it's not unusual to hear about them simply implementing another product a few years down the road and starting the cycle again.
  2. Build a partnership with your VAR/Consultant.  No one likes to have a combative relationship (consultants, too).  Understand that your partner is there to help you, and in most cases wants to make sure you are happy with them as well as the software.  So look at how you engage with them, do you do it in a proactive way? Do you ask them what they think of how you are using the software?  Ask for their help in more strategic ways, like how you might better use support or even cut your support costs through training or other avenues.
  3. Set a budget for ongoing software enhancement.  And I am not just talking about service packs and upgrades, although it can be bundled in with those costs.  With each new release, there is new functionality, and we (partners/consultants) want you to be able to take advantage of it.  But in a lot of cases, clients simply budget upgrades like service packs, with no consulting services beyond the upgrade.  Even consider inviting your consultant/partner out once a year for the sole purpose of asking "What could we be doing better with our software and processes?"  You might be surprised by their answer.
  4. Reset your approach to training to be an ongoing process, not a one time event.  I know users who have used GP for 10+ years but still find training classes, webinars, and other events to attend every year and leave excited about how they can improve their use of the software.  Join GPUG.  Go to conferences.  Treat training as something you do every year.  Not just when you add a new employee or implement a new module.
  5. Recognize that software won't solve all of your issues.  Above I mentioned clients who have monthly internal GP user meetings. These opportunities can also be opened up to include accounting and business processes, even those that fall outside of the system.  What is working?  What isn't?  And can software help? Or do you need to consider internal changes?  Approaching issues with an open mind, and recognizing that sometimes internal/institutional change is needed (with or without software) can help you make positive change in your organization.
What would be on your resolution list? I am interested to hear from you all!


Christina Phillips is a Microsoft Certified Trainer and Dynamics GP Certified Professional. She is a director with BKD Technologies, providing training, support, and project management services to new and existing Microsoft Dynamics customers. This blog represents her views only, not those of her employer.

Dynamics GP obscurity when voiding AP payments in a high volume environment

By Steve Endow

I seem to frequently work on unusual and obscure tasks and issues in Dynamics GP, and I discovered another one recently.

I have a large Dynamics GP customer that issues thousands of AP payments every month.  The payments are issued to dozens of countries using every payment mechanism imaginable.  Checks, ACH, wires, debit cards, PayPal--you name it.  The payments are issued from both Dynamics GP and through at least one third party international payment processing service.

The company issues so many payments in so many forms that they have a very interesting and challenging problem.  The problem stems from the fact that they regularly encounter situations where the payment is not successfully delivered to the vendor.  Maybe the check was returned as undeliverable.  Perhaps the ACH info wasn't correct.  Maybe the PayPal email address was wrong.  Given the number of different payment methods they use, sometimes they discover this in a few days, while sometimes it takes a few months to be notified that a payment was not successfully delivered.  Given their high payment volume, the challenge this creates is having to void hundreds of payments a month in Dynamics GP so that they can re-issue the payment.

The void process is so time consuming for them that they asked me to develop a solution that could automatically void payments in GP.  I developed that solution, which is a very long story on its own, but in the process of testing, I discovered an unusual scenario that made it difficult to automatically void a payment.

The issue is that it is possible to issue the same check or payment number from multiple checkbooks in Dynamics GP.  This isn't something I had considered before.  So if I pay a vendor with Check 100 from Checkbook 1, and then later happen to pay that same vendor with Check 100 from Checkbook 2, the vendor now has two payments with the same check number.  Given the number of GP checkbooks, the number of payment methods used, and the fact that a third party payment processor is involved, I couldn't rule out this possibility.

Here's an example of what that scenario looks like in the Void Historical Payables Transactions window.


Even if you filter the vendor and the document number, the window displays multiple payments.  In the screen shot, I used the extreme example of payments with the same date and amount.  In this case, the only way to tell the difference between the two payments is by the internal GP Payment Number value.

A user who is manually performing a void would have to select a row in the scrolling window and click on the Document Number link to drill in to the payment and see which Checkbook was used to issue the payment.  But because the Checkbook ID is not shown on the window, an automated solution just looking at the data in the scrolling window cannot tell which payment should be voided.  So I'm probably going to have to enhance the automated solution to verify the date and amount shown in the grid record, and also lookup the payment number to determine which checkbook issued the payment.
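
For what it's worth, the lookup that the automated void logic (or a curious user) needs to perform looks roughly like the sketch below.  This is a hedged example: it assumes the fully applied payments live in the PM Paid Transaction History table (PM30200), that DOCTYPE 6 represents payments, and that the checkbook is exposed there in the CHEKBKID column.  The vendor ID and document number are hypothetical placeholders.

-- Hedged sketch: list payments for a vendor and document number, showing which checkbook
-- issued each one so duplicate payment numbers can be told apart.
SELECT VENDORID, DOCNUMBR, VCHRNMBR, CHEKBKID, DOCDATE, DOCAMNT
FROM   PM30200
WHERE  VENDORID = 'ACME001'   -- hypothetical vendor ID
  AND  DOCNUMBR = '100'       -- the duplicated check/payment number
  AND  DOCTYPE  = 6           -- payment documents
ORDER BY DOCDATE;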

One could reasonably say that it is unlikely that a vendor would be issued two payments from two checkbooks with the same payment number.  I would have previously agreed, but the fact that this issue happened to come up in my limited testing on my development server would seem to indicate that it could be more likely than you might think.  And if you've worked with ERP systems long enough, you know that if an obscure problematic situation can arise, it usually will.

I thought this was a good example of how flexible functionality in Dynamics GP, combined with unexpected or complex scenarios, can produce unusual situations that a custom solution has to handle, even if they seem unlikely.



Benchmarking GL batch posting times in Dynamics GP using DEX_ROW_TS?

By Steve Endow

I just finished a call with a customer who seems to be experiencing relatively slow GL batch posting in Dynamics GP.

We were reviewing records for the GL batch in the GL20000 table, and out of curiosity, I happened to look at the DEX_ROW_TS values.  For a GL batch that had a total of 1,200 lines, the difference between the minimum and maximum DEX_ROW_TS values was just over 60 seconds.  So my interpretation is that it took over 60 seconds for GP to perform the posting and copy the records from GL10000 to GL20000, with the TS field time stamps reflecting that processing time.

There could be many reasons why DEX_ROW_TS isn't the most accurate measure of actual batch posting times, but I was curious if it could be used as a way to roughly and quickly benchmark GL batch posting times.

I didn't know if 60 seconds for a 1,200 line JE was fast or slow, so I performed a few tests on one of my development VMs.  I created two test batches:  One had 150 JEs with 8 lines each, and the other had 300 JEs with 4 lines each.  So each batch had 1,200 lines.  I then ran this query on my batches:


SELECT MAX(ORGNTSRC) AS Batch, COUNT(*) AS Rows, MIN(DEX_ROW_TS) AS StartTime, MAX(DEX_ROW_TS) AS EndTime, DATEDIFF(ss, MIN(DEX_ROW_TS), MAX(DEX_ROW_TS)) AS SecondsElapsed
FROM GL20000 
WHERE ORGNTSRC LIKE 'TEST150'
UNION
SELECT MAX(ORGNTSRC) AS Batch, COUNT(*) AS Rows, MIN(DEX_ROW_TS) AS StartTime, MAX(DEX_ROW_TS) AS EndTime, DATEDIFF(ss, MIN(DEX_ROW_TS), MAX(DEX_ROW_TS)) AS SecondsElapsed
FROM GL20000 
WHERE ORGNTSRC LIKE 'TEST300'



As you can see, my test batches showed DEX_ROW_TS elapsed times of 6 and 8 seconds, respectively.  So my test JEs appear to have posted significantly faster--in as little as 1/10th of the customer's time.

It's no surprise that my test in the virtually empty TWO database will show faster times than a large production database, but 6 seconds vs. 60 seconds is a pretty big difference.  And having worked with hundreds of customers to automate their Dynamics GP posting processes using Post Master, I am pretty sure that this customer is seeing less than optimal SQL performance, and that I'll be having a few more support calls with them in the future.
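
If you want to do a similar rough comparison across several recently posted batches at once, rather than naming each batch in a UNION, a variation that groups by the originating batch might look like the sketch below.  Again, DEX_ROW_TS is only a crude proxy for actual posting time, and the seven-day filter is just an illustrative assumption.

SELECT   ORGNTSRC AS Batch, COUNT(*) AS LineCount,
         MIN(DEX_ROW_TS) AS StartTime, MAX(DEX_ROW_TS) AS EndTime,
         DATEDIFF(ss, MIN(DEX_ROW_TS), MAX(DEX_ROW_TS)) AS SecondsElapsed
FROM     GL20000
WHERE    DEX_ROW_TS > DATEADD(dd, -7, GETUTCDATE())   -- rows stamped in roughly the last week (DEX_ROW_TS is typically UTC)
GROUP BY ORGNTSRC
ORDER BY SecondsElapsed DESC;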







6 Tips to Help You Get More Out of SQL Server Management Studio

By Steve Endow

If you use SQL Server Management Studio, I've assembled a few helpful tips that can help you save time and work more efficiently.

Here's a video where I discuss and demonstrate the tips.





1. By far the most valuable time saving tip is to use the Object Explorer Details window in SSMS.  I have had hundreds of GoToMeeting sessions with customers and consultants who only used the Object Explorer pane and weren't familiar with the benefits of the Object Explorer Details window.  If you are using the Object Explorer pane to locate tables or stored procedures, press F7 to open the Details window and save yourself a ton of time.  Check out the video to see how to quickly navigate and search using Object Explorer Details.


2.  When you are testing or debugging complex queries, error messages will be displayed below the query, showing an error message and noting the line number of the error.  If you double click on the error message, SSMS will take you to the line in the query where the error occurred.



3.  A related feature is the option to display line numbers next to the query window, allowing you to easily reference the line numbers in the query.




4.  If you've ever had to copy the results of a query and paste it into Excel, you should definitely use the Copy with Headers feature.  This allows you to easily paste the data with headers into an Excel file.  Just right click on the blank square above row 1 in the query results grid and select Copy with Headers--or press CTRL+SHIFT+C.



5.  Next is a feature that you may not need to use regularly, but it may come in handy for situations where you'd like to save the query results directly to a text file rather than paste them into Excel.  Under Options -> Query Results -> SQL Server -> Results to Text, check out the Output format options.  SSMS can format the text query results to be comma or tab delimited.  But a word of caution: the Comma delimited format does not produce a CSV compliant file--it will not put quotes around text field values that contain a comma.  So be aware of your data before using that option.



6.  The last tip involves how to fix the scaling of SQL Server Management Studio on a high DPI display, such as a 4K monitor, or a high resolution notebook, like the Surface Pro 4.  In the video, I show you how to install the fixes outlined in this post on SQL Server Central.

Before the fix, notice that the icons on the left are tiny and the text looks crammed into the window.


After the fix, the scaling is completely different, with properly sized icons and readable text.



I hope you learned at least one new trick to help you work more efficiently with SQL Server Management Studio.






A less than ideal Dynamics GP SQL Server setup

By Steve Endow

I recently wrote a post about a customer where Dynamics GP took 10 times longer to post a GL batch than one of my development virtual machines.  So a GL batch with 1200 lines that took 6 seconds to post on my server would take 60 seconds in the customer's environment.

I had another call with the GP partner today to confirm the symptoms and get some information about the customer's SQL Server.  During the call, I saw another GL batch with 1500 lines that took 88 seconds to post.  Not very good.  That's only 17 records per second, which is abysmal performance for SQL Server.

The SQL Server is a quad core machine with 16GB RAM.  The consultant didn't know if the machine was physical or virtual.  The customer has a single production company database with an MDF file that is 20.5GB, and an LDF file that is 14GB.

But, they have a TEST database, which is a recent copy of production, which has a 20.5GB MDF and a 7GB LDF.

And then they have an additional backup copy of their production database for some reason, which has a 25GB MDF and a 14GB LDF.  They also have an old copy of their production database from 2015, which has a 17GB MDF and a 14GB LDF.  And there's another random test copy that has a 14GB MDF.


But wait, there's more!  There is the active Dynamics database, which has a 5.6GB MDF and 4.6GB LDF.  And there is not just one, but TWO other copies of the Dynamics database--one 3.2GB and the other 2.7GB.

So the server only has 16GB of RAM, but there is well over 100GB of online databases on the server.  If we're optimistic, let's say that only two databases actually have any activity: the main production and test companies.  Those two databases, plus the Dynamics database, total over 45GB.

So 45GB of active databases on a server with 16GB of physical RAM.
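
If you want to tally this up yourself on a SQL Server, the standard catalog view sys.master_files lists every data and log file along with its size in 8 KB pages.  Nothing in this query is GP-specific:

SELECT DB_NAME(database_id)  AS DatabaseName,
       type_desc             AS FileType,     -- ROWS = data file, LOG = transaction log
       name                  AS LogicalName,
       size * 8 / 1024       AS SizeMB        -- size is stored in 8 KB pages
FROM   sys.master_files
ORDER BY DatabaseName, FileType;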

I then checked the SQL Server Maximum Server Memory setting, and, no surprise, it had not been changed from the default value.


The combination of insufficient RAM and lack of a reasonable Maximum Server Memory value is likely putting significant memory pressure on Windows, which then contributes to abysmal SQL Server performance.  I've seen a similar SQL Server with just 4 GP users become unresponsive, lock up GP clients, and drop network connections when under load.

The Dynamics GP consultant I spoke with was not familiar with SQL Server configuration or memory management, so I recommended that the consultant speak with his team and the customer about increasing the RAM on the server and setting the Maximum Server Memory setting to a reasonable value.
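
For reference, the Maximum Server Memory setting can be checked and changed from a query window as well as through Management Studio.  The sketch below caps SQL Server at 12 GB purely as an illustration for a 16 GB box; the right number depends on what else runs on the server, so treat it as an assumption to adjust, not a recommendation.

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)';          -- show the current value
EXEC sp_configure 'max server memory (MB)', 12288;   -- e.g. 12 GB, leaving headroom for the OS
RECONFIGURE;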

Unfortunately, I can't be certain that those two items will dramatically improve their GP batch posting performance--although I'm pretty sure it won't hurt.  Maybe the databases need to be reindexed or optimized, or maybe there is some other issue causing the poor performance. If they do upgrade the server memory, I'll try and follow up with them to see if the changes improve Dynamics GP posting performance.

If this topic is of interest to you, I recommend checking out the book Troubleshooting SQL Server by Jonathan Kehayias and Ted Kreuger.  There is a link on the page to download a free PDF of the book.  It's a few years old, but many of the SQL Server fundamentals remain the same.


Steve Endow is a Microsoft MVP for Dynamics GP and a Dynamics GP Certified IT Professional in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Twitter, YouTube, and Google+








Riddle Me This: Fixed Assets and Long Fiscal Year

This one left me scratching my head, so I am up at 2am on a Saturday and thought I  would share.  Here is the scenario...


  1. Customer has a long fiscal year due to a change in their fiscal year
  2. The long  year has 14 periods, all years prior and after have 12 periods
So we adjusted the Fixed Assets Calendar (Setup-Fixed Assets-Calendar) to have 14 periods for the current year.  We also marked the option "Short/Long Year" and specified 116.67% depreciation (so that the 13th and 14th periods depreciate normally).


All ran great when the client depreciated period 13.  It is when we get to period 14 that things go haywire.  When we run depreciation on period 14, it backs out the depreciation for period 13, creating a complete reversal entry.  The only items that depreciate properly are those placed in service in periods 12, 13, and 14.  Odd, right?  Well, wait, it gets better...


I can replicate all of this in sample data on GP2015 (the client is on 2013, so wanted to be as close to that version as possible).  So I started wondering what would happen if I backed out the period 14 depreciation. So I did that.  Re-ran depreciation for period 13, and it backed out the incorrect entry.  But then if I re-ran depreciation for period 14, it calculates correctly.  What?  Why?  Simply backing it out and rerunning it appears to fix the problem.  Not normal, right? 

From what I can tell, it has to do with reset life, and perhaps the back-out process triggers a recalc of sorts, because if I pre-emptively run reset life, period 14 will depreciate properly the first time around.  I think there is some conflicting info out there about the need to run reset life if you are creating a long year, but you heard it here first...always run reset life if you alter (even just lengthen) a year in fixed assets.


Christina Phillips is a Microsoft Certified Trainer and Dynamics GP Certified Professional. She is a director with BKD Technologies, providing training, support, and project management services to new and existing Microsoft Dynamics customers. This blog represents her views only, not those of her employer.

Is a Test environment required anymore for Dynamics GP upgrades?

By Steve Endow

I've worked with several customers recently who have upgraded Dynamics GP to a new version without doing any prior testing.  The upgrade was performed in their production environment.  No test environment. No test database upgrade. No testing integrations or customizations prior to the upgrade.  Today I was informed of another customer that will be upgrading to GP 2016 without a test environment--just an upgrade of production.  Which made me wonder...


While there are probably many people who would rail against such an approach, I'm now asking a serious question:  Do you really need a Test environment anymore for a "typical" Dynamics GP upgrade?  I'm guessing that many GP customers could probably upgrade their production environment in place without any significant issues.

Yes, there are customers with complex environments that would definitely benefit from a Test environment, and yes, there are cases where upgrades encounter errors that cause the upgrade to fail, but I suspect there are a large number of GP customers with pretty simple environments where a separate environment and extensive testing is not required and would be difficult to justify.

Years ago, before Microsoft purchased Dynamics GP, GP upgrades could be a harrowing experience.  Both the GP install and upgrade processes involved many steps and the GP installers weren't nearly as refined as they are now.  One of the things I noticed following the Microsoft acquisition was that the GP installation and upgrade process became much simpler, easier, and more reliable.  Whereas I used to always recommend setting up a separate test server and performing a test upgrade first, I have worked with several customers recently who have simply upgraded their production environment without any prior testing of a new version of GP.

If you make sure to take good database backups, keep a few GP client backups, and have a thorough upgrade plan with a solid rollback contingency, is it really necessary to have a separate Test environment and perform a full test upgrade first?
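For the backup piece in particular, copy-only, checksummed SQL backups of the system and company databases taken immediately before the upgrade are cheap insurance.  Here is a minimal sketch; it assumes the default DYNAMICS system database name and uses TWO and the file paths purely as placeholders, so substitute your own company databases and backup location.

-- Copy-only, verified backups immediately before the upgrade
-- (TWO and the file paths are placeholders)
BACKUP DATABASE DYNAMICS
    TO DISK = N'D:\Backups\DYNAMICS_PreUpgrade.bak'
    WITH COPY_ONLY, CHECKSUM, INIT, STATS = 10;

BACKUP DATABASE TWO
    TO DISK = N'D:\Backups\TWO_PreUpgrade.bak'
    WITH COPY_ONLY, CHECKSUM, INIT, STATS = 10;

-- Confirm the backup files are readable before starting the upgrade
RESTORE VERIFYONLY FROM DISK = N'D:\Backups\DYNAMICS_PreUpgrade.bak';
RESTORE VERIFYONLY FROM DISK = N'D:\Backups\TWO_PreUpgrade.bak';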

Are there particular modules, customizations, environment considerations, or other factors that you think make a Test environment more important?  Third party modules?  Customizations?  Integrations?  Web client?  On premise vs. hosted?  Lots of data or company databases that cause the upgrade to take a long time?



You can also find him on Twitter, YouTube, and Google+




Integration Manager eConnect error: Could not load type System.Runtime.Diagnostics.ITraceSourceStringProvider

By Steve Endow

This is a quick note about an obscure error that I received when trying to use Integration Manager 2013 to run an eConnect integration.


eConnect error - Could not load type System.Runtime.Diagnostics.ITraceSourceStringProvider


The error isn't particularly meaningful, so there's no way to troubleshoot it directly.  Fortunately, the error is unique enough that a Google search turns up several results, such as this thread:

https://stackoverflow.com/questions/24291769/could-not-load-type-system-runtime-diagnostics-itracesourcestringprovider


The recommended solution is to install / reinstall .NET 4.5.2.  But I verified that I already had 4.5.2 on my server, so that seemed odd.

Then I re-read the original question and noticed that the user having the issue had just upgraded their server from Windows Server 2008 to 2012.

Coincidentally, I also upgraded my server from 2008 R2 to 2012 R2 last week.  It would seem that the Windows upgrade process breaks something about the .NET 4.5 installation.

So I downloaded the .NET 4.5.2 web installer and reinstalled it.


After .NET 4.5.2 was reinstalled, the error went away.  Small victories!



You can also find him on Twitter, YouTube, and Google+






Integration Manager eConnect Adapter Error: SQL Exception Thrown in the GetNextNumber method

By Steve Endow

I was trying to do a test in Integration Manager to confirm that the eConnect GetNextNumber methods work properly.  Simple enough.  I set up a test GL transaction with the eConnect destination adapter and ran it.  After sitting for several minutes with no activity, I clicked Stop, and after a few more minutes I was finally able to see an error.


SQL Exception Thrown in the GetNextNumber method


Hmmm. Okay, so what is causing this?

I checked my SQL Server, but I didn't see any errors.  I then tried a SQL Profiler trace, but it just displayed some strange queries against the company databases with no references to GetNextNumber.  I checked SQL Activity Monitor, but didn't see any issues there either.  I was stumped.

Then I remembered one setting from when I created the integration.


Notice that the Server Name defaulted to localhost.  Since IM is running on the same machine as my SQL Server, I didn't think twice about it--even though I never use localhost for SQL Server connections (for good reason, as this issue demonstrates).

Well, the problem is that this server uses a named SQL Server instance, not the default instance, so localhost by itself will not work.  I changed the Server Name to the correct server and instance name, and the integration ran fine with no issues.

I'm puzzled why IM didn't time out or log a meaningful error message, such as "Unable to connect to SQL Server."  Fortunately, it was a simple enough fix once I realized the problem.
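As a side note, if you are not sure what to enter in the Server Name field instead of localhost, the server itself can tell you.  This is a minimal sketch using standard SQL Server properties (nothing IM-specific); run it from any query window connected to the GP SQL Server.

-- Returns the server\instance name to use in place of localhost
SELECT  @@SERVERNAME                    AS FullServerName,
        SERVERPROPERTY('MachineName')   AS MachineName,
        SERVERPROPERTY('InstanceName')  AS InstanceName;  -- NULL = default instance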


You can also find him on Twitter, YouTube, and Google+








Customer Service and Failure

I hate car problems.  This is a fact of my life.  My dad was a car guy.  My brother is a car guy.  But I cringe every time I have to deal with car issues.  Fortunately, we have a mechanic whom we trust and have taken both of our cars to for years.  (See the parallel to a software partner/consultant already?)  So, anyway, driving home last Friday my check engine light came on.  I did my normal scary internet searching for some basic things to try, and we cycled through those over the weekend (again, anyone picking up on the parallel to working with your software solution?).


Finally, on Tuesday, we caved and took it to our mechanic.  We like him, but we always secretly cringe because we don't know enough to know how much a repair will cost.  Our mechanic fixed the issue (for those who are wondering: an engine oil pressure sensor malfunction), although naturally it was a bit more than I wanted it to be (I wanted the under-$100 fix, of course!).  So I am sure by now you are wondering why (despite the clever parallels) I am blogging about car problems on a blog devoted to Dynamics GP and software implementation.  Well, it is what came next that I think is a testament to how you think about customer service and approach failures with software and with partners.


On Wednesday morning, I woke up and got myself and the kids ready for the day.  I loaded the car with 20 or so cases of Girl Scout cookies (our office did a cookie pool to support all of the Girl Scouts in the office), and then we loaded up to head to drop-off and work.  As soon as I started the car, I knew I had a problem: a horribly rough idle, then the warning lights started flashing, and the next thing I knew the car wouldn't go faster than 10 mph.  Ugh. Ugh. Ugh.  Transfer the cookies, kid, and laptop to our other car, and call the mechanic.  When I talked to one of his employees, I was told they would either come out and get it or have it towed.  A couple of hours passed, and I had not heard from them, so I texted my husband to see if he had.  His text back was a simple "Yes, they came out and it's fixed and they are test driving it around the neighborhood."


So there you go.  A mechanic who came to our house (his wife watched the store while he and his employee came out) and fixed something that was not tightened enough after the original repair.  Now, I know that some of you might think "dang right he came to your house to fix his own mistake" but I actually think of it totally differently.


Mistakes are inevitable in our work. We are human.  Software (and automobiles) are complicated.  We multi-task constantly across different clients, projects, and even different software.  Now, do we expect failures to be common?  No (this would be the first time in many years that we have had to call our mechanic back after a repair).  But I would argue that true customer service lies in how we respond to failures.  Do we...
  • Take on a proactive mindset?
  • Bring "solutions" to the table?
  • Skip the defensiveness and blame game?
  • Go the extra mile to resolve the issue?
I would argue that how we respond to failure as partners is what builds customer loyalty, because failure is unavoidable at some point in any business relationship: we deal with imperfect people, imperfect teams and organizations (clients and partners alike), and imperfect software.


In talking with the project managers where I work, we often discuss that projects will have bumps.  Trying to manage so as to avoid any bumps at all will leave you exhausted, ineffective, and reactionary.  But by understanding that projects will have bumps (miscommunications, missed expectations, etc.), you are "lowering the bar."  You are, in that case, adopting a proactive, pragmatic, and risk-averse mindset: looking to manage the bumps, how we respond to them, and how we engage with the client for ultimate project success.


Look for the customer service in the failures.  That is where you will find it.  And that is where you will build the lasting partnerships (both internal and external) that will allow you and your organization to succeed.
Christina Phillips is a Microsoft Certified Trainer and Dynamics GP Certified Professional. She is a director with BKD Technologies, providing training, support, and project management services to new and existing Microsoft Dynamics customers. This blog represents her views only, not those of her employer.

Don't Forget- Standard GP SQL Views

From time to time, we still get inquiries from folks building views in SQL Server that actually already exist.  So I thought I would post a quick reminder that every SmartList that ships with GP has a corresponding SQL view that can be used for your own purposes as well (e.g., SQL reports, Excel queries, SmartList Builder, etc.).  And, remember, you can link views together as well as to other tables when creating reports.  Just don't modify the standard views (if you need to add to them, create a new one with the same design and then modify it; see the sketch after the list below).  Here are some of the most common ones available (this is NOT all of them) on any GP database...


  • AATransactions
  • Accounts
  • AccountSummary
  • AccountTransactions
  • BankTransactions
  • AttendanceDetail
  • BillofMaterials
  • CertificateList
  • Customers
  • EmployeeBenefit
  • Employees
  • EmployeeSummary
  • FixedAssets
  • FixedAssetsBooks
  • FixedAssetsPurchase
  • InventoryPurchaseReceipts
  • ItemQuantities
  • PayrollTransactions
  • PayablesTransactions
  • PurchaseLineItems
  • SalesLineItems
  • Vendors
When I teach beginner reporting classes, I advise students to always "look twice" for a standard view before embarking on creating new views or combining open/history/work tables in a SQL statement (often the standard views already do this for you).  Good luck and happy reporting!
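As a quick illustration of the "don't modify the standard views" advice, here is a minimal sketch.  It assumes the shipped dbo.Accounts view listed above; the rpt_Accounts name is just an example, and any columns or joins you add should come from your own database.

-- Leave the shipped dbo.Accounts view alone and build your own on top of it
-- (rpt_Accounts is just an example name)
CREATE VIEW dbo.rpt_Accounts
AS
SELECT  a.*     -- everything the standard Accounts view already exposes
        -- add your own columns or joins here as needed
FROM    dbo.Accounts AS a;
GO

-- Use it like any other view in SQL reports, Excel queries, or SmartList Builder
SELECT TOP (10) * FROM dbo.rpt_Accounts;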


Christina Phillips is a Microsoft Certified Trainer and Dynamics GP Certified Professional. She is a director with BKD Technologies, providing training, support, and project management services to new and existing Microsoft Dynamics customers. This blog represents her views only, not those of her employer.