
Implementing an Inbox Zero workflow using Outlook on Windows and iPhone

By Steve Endow

Uncle.  I give up.  I have lost the fight. 


Email has won.  I am defeated.

What was once a great tool for communication has become an overbearing hassle that has destroyed my productivity.

I receive around 50 to 75 emails every weekday.  On a very bad day, I'll hit 100 emails.  I've determined that 100 inbound emails a day is completely unmanageable for me.  With my current processes (or lack thereof), I cannot possibly be productive with that many emails coming at me.  The number of responses and tasks from 100 emails prevents me from doing any other work.

If all I did was "manage" my email all day, and do nothing else, I could probably wrangle my Inbox, but I wouldn't get any "real work" done.  When I focus on doing real work and ignore my email for a day, my Inbox explodes.

It isn't just the emails themselves.  It's also that many of the emails have some type of commitment attached to them.


"Hey Steve, please review this thread of 30 cryptic replies below and let me know what you think."

"Here's the 15 page document I created, please proofread it."

"When can you schedule a call?"

"We are getting an error.  What is causing this?"

"Here are links to a forum post and KB article. Does this explain the error I'm getting?"

"How many hours will it take you to do X?"

"I sent you an email earlier?  Did you get my email?  Can you reply to my email?"


People seem to expect a relatively prompt reply to their emails--because they think their request is most important, naturally, and because I don't have any other work to do, right?


This week, a link to this article appeared in my Twitter feed:

One-Touch to Inbox Zero
By Tiago Forte of Forte Labs



I had heard of Inbox Zero before, but I dismissed it as a bit of a gimmick without fully understanding it.

This time, I actually read the article by Tiago Forte and his explanation finally clicked for me.  His examples and analogies made sense, and his emphasis on email as the first step of a more comprehensive communication and productivity workflow helped me build a new interpretation of Inbox Zero.

I previously saw email as the problem, but after reading the article, I suspect that my overflowing Inbox is really just a symptom of other underlying problems.

For example:

1. When it comes to email, I'm not organized--I don't have a system for dealing with the flood of emails.  They just pile up.

2. I'm using email as an organizational system.  As Tiago points out with his mailbox analogy, this doesn't make any sense, and is not effective.

3. I very likely have a capacity problem.  If I fully implement his Inbox Zero workflow, I suspect that my Task list is going to grow out of control.  But that's probably a good thing.  At least then I will better understand my capacity problem, instead of blaming it on the emails flooding my Inbox.


The Inbox Zero article helps address email organization by proposing a simple, but strict workflow for all emails that are in your Inbox.

The Inbox is not a place to work.  It is simply the place where your email arrives and is sorted.  Just as you don't stand in front of your mailbox at home to tear open and pay your bills or read your magazines in your yard, you don't work on your email from your Inbox.  And as he explains, when you visit your mailbox, you don't let your mail pile up in the mailbox--you remove the mail and deal with it elsewhere.

Got it.

Based on the recommendations in the article, I cleared out as many emails as I could from my Inbox, and was left with 149 emails that I probably needed to deal with.  I moved those to a temporary "To Do" folder, and I quickly felt a sense of accomplishment with an empty Inbox.


I then started working on setting up the "four downstream systems" to support his Inbox Zero workflow.

1. Calendar
2. Task
3. Reference
4. Read Later

Here is his nice graphic of the workflow:



The challenge was that he uses GMail and OS X, so his recommended tools aren't really applicable to Outlook (using MS Exchange) on Windows, and iPhone with default iOS apps that I use.

And while I have previously tried Evernote, I find that I prefer the more structured nature of OneNote.

So I spent the day figuring out how to implement his workflow in my environment.

Here's what I came up with.


1. Outlook 2016 Mail (MS Exchange) on Windows with a single mailbox, which syncs to iOS Mail on my iPhone / iPad
2. Outlook Calendar on Windows, which syncs to iOS Calendar on my iPhone / iPad
3. Outlook Tasks on Windows, which syncs to iOS Reminders on my iPhone / iPad
4. OneNote on Windows, which syncs to OneNote app on my iPhone / iPad
5. Pocket Chrome Extension on Windows and Pocket app on my iPhone / iPad.  I prefer the article formatting provided by Pocket, so I use it instead of Instapaper.  And the Pocket email service appears to be much better at picking out a URL from an email message, whereas Instapaper seems to require a blank email with only a single URL.

The one downside to Pocket is that the free version includes ads in your reading list, which you have to ignore or hide, unless you want to pay for a Premium subscription.  I'll see how much I use it and whether it's worth the subscription, or whether I want to use Instapaper instead.


To streamline the 6 numbered steps in his workflow, I set up the following steps and shortcuts.

Outlook 2016 apparently doesn't support custom keyboard shortcuts for specific commands, but it does allow you to set up "Quick Steps", which can be assigned shortcuts of CTRL+SHIFT+(0-9).


I set up four Quick Steps to provide keyboard shortcuts for flows 1, 3, 4, and 6.  Note that these Quick Steps only work when you are in the Mail view in Outlook, as they operate on the message you have selected.

Here are all 6 of the flows:

1. Archive:  CTRL+SHIFT+9.  This moves the selected or open email to my single archive folder.  I do not categorize messages into different folders, as the Search feature is finally good enough that I Search instead of looking through different folders.  I have been using this Quick Step for years, so it was already done.  I chose the number 9 because I typically press CTRL+SHIFT with my left hand, making 9 easy to press with my right hand.

2. Reply:  Outlook on Windows offers CTRL+R for Reply, and CTRL+SHIFT+R for Reply to All.

3. Add to Calendar:  CTRL+SHIFT+8 creates a new appointment and includes the text of the selected email in the body / note section of the appointment.  I chose to use the message text, as that will be easier for me and any calendar invite recipients to read.


4. Add Task:  CTRL+SHIFT+7 creates a new Task and attaches the selected email to the Task.  For this Quick Step, I chose to attach the email (instead of copy text of the message) because I suspect I will need to open the email so that I have the option to reply, view message attachments, etc. when I work on the task.

5. Add to Reference:  Since I use OneNote, I will click on the Add To OneNote button in the main Outlook ribbon.  I have been using OneNote for several years and have been using this button, so this flow was already in place.


6.  Read Later:  CTRL+SHIFT+6 forwards the selected email to add@getpocket.com, which will automatically add the first URL in the email to your reading list.  Any additional URLs in the email will be ignored, so I may have to edit the emails before sending them to ensure that the desired URL is first in the email.


In addition to setting up these flows, I modified my Outlook Inbox view to sort messages by date ascending, so they would be listed from oldest to newest, per Tiago's recommendation.

As he instructs, I also enabled threading, "Show as Conversations" in Outlook, which is something I have resisted for years.  I didn't like threading because when my Inbox was filled with hundreds of messages, the threading often made it difficult to review messages.  But if my Inbox stays lean, threading should help me better deal with the occasional flurries of Reply All conversations and start with the last message in the thread.



So, in theory, these 6 steps will allow you to very quickly sort through the email in your Inbox and keep it empty.

Okay, I get that.  But there are two concerns I have at the beginning of my Inbox Zero journey, including one which I referenced earlier.

1. Replying to email can be time consuming.  So I suspect I am going to have to come up with criteria for when to reply directly to an email from my Inbox.  Single word replies only?  Yes, No, Okay, Thanks?  What about if someone asks me for 3 dates/times when I'm available for a call next week?  That reply could take a minute to compose. What if they ask me a question that will take 2 minutes for me to answer?  I'll need some way to assess how long it will take to reply.  If it will take more than X seconds or require me to do research or do additional work to reply, I'll move it to a task.

2. What happens when my Inbox is empty, but my Task List keeps growing with the list of emails that I've converted to tasks?  Just as I'm unable to respond to all of my current emails, I have the strong suspicion that I'm going to be unable to get to all of the new Tasks that those same emails are going to create.


Aside from these two items that I'm going to have to figure out, I think it's a good start.

Now that I've got things set up, I need to go through the 149 messages in my temporary To Do folder and use them to practice this new workflow.

Have you implemented an Inbox Zero workflow?  If so, how is it different than Tiago's process that I'm testing?  Are there any other tips, tools, or techniques that you have found helpful for dealing with email and being more productive?


Steve Endow is a Microsoft MVP in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Twitter, YouTube, and Google+





Dynamics GP Integrations: Eliminate the need for perfection

By Steve Endow

I had a call with a customer this morning to review an error they were seeing with their AP Invoice integration.  The custom integration is moderately complex, importing invoices from 90 retail locations into 30 different Dynamics GP companies, with Intercompany distributions.

There's an interesting twist to this integration.  The invoice data contains a retail store number, but it doesn't tell the import which GP company the invoice belongs to.  And there is nothing in the GP companies indicating which retail stores belong to which database.  Some retail stores have their own dedicated GP company database, while other retail stores are managed together in a GP company database.  The users just know which retail store belongs to which GP company.

So how does the integration figure out which company an invoice belongs to?

We could have created a mapping table, listing each retail store ID number and the corresponding GP company database.  But the problem with mapping tables is that they have to be maintained.  When a new retail store is opened, or a new GP database is created, users will invariably forget to update the custom mapping table.

So for this integration, I tried something new.  The one clue in the invoice data file is a GL account number.  The first segment of the GL account is a two digit number that uniquely identifies the Dynamics GP company.  Like this:

01 = Company A
03 = Company B
06 = Company C
25, 31, 49 = Company D

So, the integration reads the GL expense account assigned to the invoice, and uses that to determine which company the invoice belongs to.

When the integration launches, it queries all of the GP databases to determine which Segment 1 values are used in each database.

DECLARE @INTERID varchar(10) = ''
DECLARE @SQL varchar(MAX) = ''

DECLARE INTERID_cursor CURSOR FOR
SELECT INTERID FROM DYNAMICS..SY01500

OPEN INTERID_cursor
FETCH NEXT FROM INTERID_cursor INTO @INTERID

WHILE @@FETCH_STATUS = 0
BEGIN
       IF @SQL <> '' BEGIN SET @SQL += ' UNION '; END
       SET @SQL += ' SELECT ''' + @INTERID + ''' AS INTERID, (SELECT COUNT(DISTINCT ACTNUMBR_1) FROM ' + @INTERID + '..GL00100) AS Segment1Values, (SELECT TOP 1 ACTNUMBR_1 FROM ' + @INTERID + '..GL00100) AS CompanyID';
       FETCH NEXT FROM INTERID_cursor INTO @INTERID
END

CLOSE INTERID_cursor
DEALLOCATE INTERID_cursor

EXEC(@SQL)


It is then able to use this "mapping" to match invoices to databases based on the GL expense account.

But, this scheme is based on the critical assumption that in Company A, every single GL account will always have a first segment value of 01.  And Company B will always have a segment 1 value of 03.  Or Segment 1 value of 25, 31, and 49 will only ever exist in Company D.  For every account.  No exceptions.

I'll let you guess what happens next.

A user enters a "06" account in Company A.  And another user enters a "01" account in Company B.

Despite the customer's insistence that this would never happen, and that they always make sure that only one unique Segment 1 value is used in each company, someone ends up entering a Segment 1 value in the wrong company.

Am I surprised by this?  Not at all.  Whenever the word "never" is used during integration design discussions, that's always a clue.  I substitute it with "usually" or "mostly".  There are almost always exceptions, whether intentional or unintentional.

So now what?  If the program can't ensure that the Segment 1 values are unique to each company, what can it do?

Well, the second layer is that during the import process, the integration checks the destination company database to verify that the GL account exists.  If it queries Company A for a 06 GL account and doesn't find it, it logs an error and that invoice isn't imported.  This is the error that was logged this morning.

But then what?  The customer insists, again, that they only use the 06 accounts in Company C, so the import must be wrong.  So we run the above query again and find that someone accidentally entered a 06 account in Company A, which confused the import.  And the customer is shocked that such a mistake could happen.  For the third time.

But I'm not satisfied with this process.  Because 6 months from now, it's going to happen again.  And they'll blame the integration again.  And we'll have to manually run the query again and find which account was added to the wrong company.

So let's just assume that this mistake is going to continue to happen and deal with it.  I'm thinking that I need to modify the integration to have it review the results of the query above.  If it finds that 06 is present in more than one GP database, it needs to log an error and let the user know.

"Hey, I found account 06-5555-00 in Company A. That doesn't look right. Please look into it."

This will proactively identify that an issue exists, identify the specific account, identify the company, and give the user enough information to research and resolve the problem.
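Here is a minimal sketch of the kind of check I have in mind, reusing the cursor pattern from the query above. The temp table name and column sizes are mine, for illustration only--the real integration would log the results through its normal error handling.

-- Gather every distinct Segment 1 value from each company database,
-- then flag any value that appears in more than one company.
DECLARE @INTERID varchar(10) = ''
DECLARE @SQL varchar(MAX) = ''

CREATE TABLE #Segment1Map (INTERID varchar(10), ACTNUMBR_1 varchar(25))

DECLARE INTERID_cursor CURSOR FOR
SELECT INTERID FROM DYNAMICS..SY01500

OPEN INTERID_cursor
FETCH NEXT FROM INTERID_cursor INTO @INTERID

WHILE @@FETCH_STATUS = 0
BEGIN
       SET @SQL = 'INSERT INTO #Segment1Map SELECT ''' + @INTERID + ''', ACTNUMBR_1 FROM ' + @INTERID + '..GL00100 GROUP BY ACTNUMBR_1'
       EXEC(@SQL)
       FETCH NEXT FROM INTERID_cursor INTO @INTERID
END

CLOSE INTERID_cursor
DEALLOCATE INTERID_cursor

-- Any Segment 1 value used in more than one company database gets logged as an error
SELECT ACTNUMBR_1, COUNT(*) AS CompanyCount
FROM #Segment1Map
GROUP BY ACTNUMBR_1
HAVING COUNT(*) > 1

DROP TABLE #Segment1Map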

It assumes the mistake will happen. It eliminates the need for perfection in a complex process in a moderately complex environment, where employees have 90 other things on their minds.  And it should only take a few lines of code--a one time investment that will save time for years into the future.

So why not do this for other possible exceptions and issues?  If you can identify other potential mistakes or errors, why not just code for all of them?  Because there are endless possible exceptions, and it would cost a fortune to handle all of them, most of which will never occur.

I usually release an initial version of an integration, identify the exceptions, and quickly handle the errors that do occur.  When a new exception comes up, I handle it.  It's usually a surprisingly small number, like 3 to 5 different data issues that cause problems.

So that's my philosophy: Eliminate the need for perfection in integrations whenever possible or practical.


Steve Endow is a Microsoft MVP in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Twitter, YouTube, and Google+







Are SQL Server subqueries bad? Let's find out!

By Steve Endow

For the past several years, I've noticed that I have generally avoided using subqueries, based on a suspicion that they are probably less efficient than a JOIN.  I do still use them for one-time or ad-hoc queries where performance isn't a concern, but I have been avoiding them for any production queries that I release.  But I haven't done any research to support my suspicion.

This Saturday morning, while working on another SQL query optimization issue, I figured I would try a test to compare the performance of a simple subquery vs. a JOIN.

What do you think?  Do you think that subqueries are typically slower than JOINs?  If so, how much slower?

Here's a YouTube video where I test my sample queries and review the results.



Before doing my test, I searched the interwebs and found a post (unfortunately the images aren't showing for me) that appears to definitively demonstrate that correlated subqueries perform much worse than similar JOIN queries.  The explanation made sense.

To qualify my test setup: these queries were run on SQL Server 2014 SP1 using Management Studio 17.4 against the GP 2016 Fabrikam/TWO database.  Also notable: I have 73,752 records in my SOP30300 table--which is probably quite a bit more than a vanilla TWO database.  I suspect this is important, as the results may be different for other SQL Server versions, and may vary based on the number of tables and records.




Here is my very simple sample subquery, which I believe is technically called a "correlated subquery", because it uses a value from the main query to filter the results of the subquery (that is, the inner query uses a value from the outer query).

The second query produces the same result by using a JOIN.


SET STATISTICS IO, TIME ON

SELECT i.ITEMNMBR AS ItemNumber, (SELECT COUNT(*) FROM dbo.SOP30300 WHERE ITEMNMBR = i.ITEMNMBR) AS LineCount FROM IV00101 i

SELECT i.ITEMNMBR, COUNT(sop.ITEMNMBR) AS LineCount
FROM IV00101 i
LEFT OUTER JOIN SOP30300 sop ON sop.ITEMNMBR = i.ITEMNMBR
GROUP BY i.ITEMNMBR

SET STATISTICS IO, TIME OFF


Pretty simple--it lists all item numbers, and for each item, counts how many SOP lines exist for that item.

My assumption was that the subquery would be noticeably slower than the JOIN.

To my surprise, the two queries had effectively the same cost and performance.


Interestingly, there are more reads for the subquery version, but the performance is effectively the same--in fact the subquery was slightly faster in this test.  This is not what I expected.

But there's more!

Take a look at the actual execution plan.  Take a good, close look.  What do you see?



I am sure that the first few times I ran the two queries, there WAS a difference in the execution plan, but it still showed a 50%/50% split between the two queries--effectively the same cost. But after several runs, I now consistently see the exact same query plan for both queries.  The EXACT same query plan.

WHAT?  This is absolutely not what I expected.

And after running the query a few more times, the statistics are now always identical.  I'm afraid the machines are learning.



So...what does this mean?  Well, I wouldn't call this a universal rule, but, for a simple query, on SQL Server 2014 SP1, with a relatively small result set, it appears that SQL Server is able to figure out that the two different queries are effectively the same, and after running the queries a few times, SQL uses the exact same execution plan for both queries.

Does this mean that subqueries are exactly the same as JOINs?  I would assume the answer is no. If you have a more complex query, a slightly different subquery, more data, more JOINs, or a different set of indexes on the tables, it could be that a subquery produces a wildly different execution plan that is much slower than an equivalent JOIN.  And if you run the same query on a different version of SQL Server, the query optimizer may behave completely differently.

But what this did show me is that correlated subqueries are not ALWAYS more costly than a JOIN. So I don't necessarily have to avoid them if they are easier to use or make query design or prototyping faster.

And now I know!


Steve Endow is a Microsoft MVP in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Twitter, YouTube, and Google+



Are you "closing the loop" with your Dynamics GP system integrations?

By Steve Endow

I've been developing system integrations for so long that I sometimes forget that some parts of system integration design may not be obvious to customers I work with.

I have been working with a company that is integrating their document management solution into Dynamics GP.  Their software will capture a scan of a document, such as a vendor invoice, their workflow will route the document image, a user will verify the vendor and the expense accounts, and once the workflow is complete, the software will import the invoice data into Dynamics GP using eConnect.  Once the invoice is imported, a GP user can view the Payables Transaction invoice in Dynamics GP, and also open the scanned invoice image directly from GP.

Okay, that sounds pretty straightforward...

The initial version of the integration with Dynamics GP is asynchronous, using XML files.  The document management system exports an XML file containing the "metadata" for the vendor invoice, and the GP import is a scheduled task that regularly picks up those XML files and imports the data into Dynamics GP as transactions.  (Aside: This design was chosen for its simplicity, as the initial version of the integration was a prototype--a proof of concept to explore how the system could be integrated with Dynamics GP.)

Okay, so...your point is...?



In an asynchronous integration like this, how does the document management system know that the invoice imported successfully?  If it did import successfully into GP, how can the document management system know that?  Or if the import failed, how will the document management system know that too?

Ahhhhh...I see where you're going with this...

The non-technical customer contact, who does not regularly design or work with system integrations, hadn't thought about those questions.  Once he thought about them, he realized the value.

What if the Dynamics GP eConnect integration, after it imports each invoice, is able to call the API of the document management system to tell it that the AP invoice imported successfully?  And in the process, the GP integration can also send over the unique GP Voucher Number to the document management system, providing a unique link between the data in both systems.  If the import failed, the GP integration could also send a message back letting it know that an error occurred, and include an error message explaining the cause (vendor not found, GL account not found, etc.).

If that process were added to the integration, the GP integration could "close the loop" with the document management system.

A user in the document management system could then see that an invoice was successfully imported into GP.  The workflow in the document management system could know if an invoice error occurred and re-route the document for review and correction of the data.

A query or report could verify that all of the AP invoices in the document management system had been imported into GP, or could find any invoices that were still not successfully imported.  A query could compare the data in both systems to ensure that all of the voucher numbers stored in the document management system existed in Dynamics GP--because it is possible that an invoice or batch could be accidentally deleted in Dynamics GP.
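As a concrete illustration, if the voucher numbers recorded by the document management system were mirrored into a staging table in SQL Server, the comparison could be a simple query. The DocMgmtInvoice table below is hypothetical--just a stand-in for wherever the document management system's data ends up:

-- Sketch: find vouchers the document management system believes were imported,
-- but that no longer exist in the GP PM Key Master table (PM00400).
-- DocMgmtInvoice is a hypothetical staging table of voucher numbers.
SELECT d.VoucherNumber
FROM DocMgmtInvoice d
LEFT OUTER JOIN PM00400 k ON k.CNTRLNUM = d.VoucherNumber
WHERE k.CNTRLNUM IS NULL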

Okay, cool, that makes sense.

And while discussing the capabilities of the document management system API, I realized that the integration could be enhanced to be synchronous, and could be handled completely by the Dynamics GP integration.

The GP integration could query the document management system API, asking for all new invoices, or all invoices that do not yet have a Voucher Number assigned.  It could then retrieve the metadata for each invoice, import each as an AP voucher in GP, and then save the Voucher Number back to the document management system.

That sounds appealing, but there may be reasons to not get that fancy.  This document management system offers both an on premises version and a cloud version of their product, so the synchronous design may not work, or may not work as well, with the cloud version.

But I offered it as something to consider if there was some benefit of that design.

So when you are integrating a system with Dynamics GP, consider whether you should close the loop between the two systems.


Steve Endow is a Dynamics GP Certified Trainer and Dynamics GP Certified IT Professional in Los Angeles.  He is also the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Google+ and Twitter




My latest rookie SQL mistake...

By Steve Endow

I just discovered a fun mistake that I made in a SQL script.  It's a rookie mistake, but it's one of those somewhat novel mistakes that I think is easily missed in many projects.

I developed a Dynamics GP SOP Invoice import for a customer using .NET and eConnect.  It has been in use for over 3 years and working great, but recently they finally had a scenario where they uncovered the latent bug.

After reviewing my code and looking at the data that triggered the bug, I found that I had a design flaw in a SQL statement.  The flaw wasn't discovered during testing because I never anticipated a specific use case and boundary condition, so I never tested the scenario, and it took over 3 years for the customer to encounter it.

The customer is unique in that they will import an invoice, such as invoice number 123456, that relates to contract number 123456.  Then a few days later they will need to make an adjustment to the contract, so they will issue a related invoice to add services to the contract.  To help track the related transaction, the new invoice is imported into GP with a suffix on the invoice number, such as 123456-1.  A few days later, they will issue a credit memo to decrease the contract amount, and that CM will be imported as document number 123456-2, etc.  These numeric suffixes are added to the document number by the eConnect import.

Last week, the customer emailed me with a problem.  They were getting this eConnect error:


Error Number = 2221  Stored Procedure= taSopLineIvcInsert  Error Description = Duplicate document number. If adding or updating lines to an existing document, UpdateIfExists must = 1 
Node Identifier Parameters: taSopLineIvcInsert SOPNUMBE = 123456-10  SOPTYPE = 3


Hmmm.  So it looks like invoice 123456-10 already exists in GP.  So why did the import think that the -10 suffix should be used?

This is the SQL that is being used to get the last document number suffix from GP.

DECLARE @SOPNUMBE AS varchar(15)
SET @SOPNUMBE = '123456-%'

SELECT ISNULL(MAX(SOPNUMBE), '') AS SOPNUMBE FROM
(SELECT SOPNUMBE FROM SOP10100 WHERE SOPNUMBE LIKE @SOPNUMBE
UNION SELECT SOPNUMBE FROM SOP30200 WHERE SOPNUMBE LIKE @SOPNUMBE) AS dtSOP


Do you see my mistake?

It's not a syntax issue or a typo--the SQL will run just fine.  The mistake is a design flaw.

Do you see it yet?  (if it's not completely obvious, I'll feel better)

Here's a clue.  If there are 10 or more suffixes for a given contract, the query will always return 123456-9.  So if invoice 123456-10 already exists in GP, the query will still return a MAX value of 123456-9.

That clue probably makes it obvious that my mistake was using MAX directly on the SOPNUMBE field, which is a char field.

The T-SQL documentation doesn't explicitly discuss this use case, but if you've done much programming (or perhaps even just sorted data in Excel), you've probably seen this problem.  When sorting character data, values such as 10, 11, 12, 20, and 30 all sort before 9, because strings are compared character by character: any value whose first character is less than '9' sorts before it, no matter how large the number is.

So, when the MAX(SOPNUMBE) function is called, it looks at the invoices from 123456-1 to 123456-10 and declares 123456-9 as the MAX.
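You can see the behavior with a quick throwaway query--these are just string literals, not actual GP data:

-- MAX on character data compares character by character, so '123456-9' beats '123456-10'
SELECT MAX(SuffixedNumber) AS MaxValue
FROM (VALUES ('123456-1'), ('123456-2'), ('123456-9'), ('123456-10')) AS t(SuffixedNumber)
-- Returns: 123456-9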

Using the MAX function on a char field was my rookie mistake.  And not testing on invoices with more than 10 adjustments was the use case that I didn't think of during the testing process.

So how do I fix this design flaw?  Well, I need to find a way to convert the suffix to a numeric value so that I can find the numeric max value of all of the suffixes. 

There are probably several ways to accomplish this, but here's what I came up with.

SELECT ISNULL(MAX(SuffixValue), 0) AS Suffix FROM
(
SELECT CAST(SUBSTRING(SOPNUMBE, CHARINDEX('-', SOPNUMBE) + 1, LEN(SOPNUMBE) - CHARINDEX('-', SOPNUMBE)) AS int) AS SuffixValue
FROM MaxTest
WHERE SOPNUMBE LIKE '123450-%'
) AS dtSuffix



This version converts the suffix to an integer, and those results go into a derived table so that I can then use MAX.  It looks like this should work.

So I thought I would share this example of how a little mistake can easily be overlooked, resulting in a latent bug that can take years to manifest.  And a rookie mistake, no less!


Steve Endow is a Microsoft MVP in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Twitter, YouTube, and Google+







Convert DEX_ROW_TS to a local time zone using AT TIME ZONE

By Steve Endow

This week I attended another great webinar by Kendra Little of SQLWorkbooks.com.  (If you aren't familiar with Kendra, check out her free webinars and her excellent catalog of online courses.)

One neat thing that Kendra always seems to do in her webinars and courses is subtly use new(er) SQL Server features.  This week, she just happened to use the AT TIME ZONE statement in one of her queries.  As soon as I saw it, I knew I had to try it with Dynamics GP.

Dynamics GP doesn't have much time zone sensitive data, but one field that I am starting to rely on more frequently is the DEX_ROW_TS field, which is now present in several key GP tables.  This field stores a last updated date time stamp.

DEX_ROW_TS is a bit unique for GP for at least two reasons.  First, it's a rare time stamp field.  While GP has many date fields, those date fields normally have a time of 00:00:00.000--so it's just a date at midnight, with no timestamp.

The second unique thing about DEX_ROW_TS is that it stores the datetime with a UTC timezone offset.  So if you ever query the DEX_ROW_TS field you need to remember that it isn't local time.

I previously wrote a post about DEX_ROW_TS and how to use some SQL date functions to convert the value to your local time zone, but that approach felt a bit like duct tape and twine, and I would have to look up the syntax every time to use it.

Enter the very cool SQL Server 2016 AT TIME ZONE function.  This function makes it very easy to assign a time zone to a datetime value, and then convert it to another time zone.

(I'm calling AT TIME ZONE a function for now because I haven't found a better name for it. It doesn't read like a typical function, but it acts like one, so Function is the best name I have so far. If you know of the proper technical name for it, let me know in the comments below.)




Here is an example where I convert DEX_ROW_TS to Pacific Local Time.
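Here is roughly what the query looks like, run against the RM00101 customer master table (assuming your GP version includes DEX_ROW_TS on that table):

-- Convert DEX_ROW_TS from UTC to Pacific time for a single customer record
SELECT CUSTNMBR, DEX_ROW_TS,
       DEX_ROW_TS AT TIME ZONE 'UTC' AT TIME ZONE 'Pacific Standard Time' AS LocalTime
FROM RM00101
WHERE CUSTNMBR = 'AARONFIT0001'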


The query first 'assigns' the DEX_ROW_TS value to the UTC time zone:

DEX_ROW_TS 
AT TIME ZONE 'UTC'

And then it adds another time zone to convert that UTC value to Pacific time:

DEX_ROW_TS 
AT TIME ZONE 'UTC'
AT TIME ZONE 'Pacific Standard Time'

In the results, you see that DEX_ROW_TS shows that I updated AARONFIT0001 on Feb 24 at 1:03pm.  But that is UTC time.  After converting to Pacific Standard Time, we see that the update was made on Feb 24 at 5:03am local time.

The function "AT TIME ZONE" is a bit clunky looking, and as I mentioned, the syntax is nothing like a typical function, but it's very easy to remember and easy to use.  The only thing you may need to research are the valid time zone names.

Unfortunately, this feature was introduced with SQL Server 2016, so you will need to be using 2016 or higher to take advantage of it.  But hopefully as customers upgrade GP, they upgrade their SQL Server version so that you can take advantage of this cool feature.

Here is another post discussing AT TIME ZONE:

https://sqlperformance.com/2016/07/sql-plan/at-time-zone


Enjoy!


Steve Endow is a Dynamics GP Certified Trainer and Dynamics GP Certified IT Professional in Los Angeles.  He is also the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Google+ and Twitter






How to improve Dynamics GP with a little bit of VBA

By Steve Endow

I've had a few Dynamics GP customers that purchase software from me every few years, and a few of them have mailed checks to my old mailing address from 4 years ago. How can this happen?

Well, the Dynamics GP Payables Transaction Entry window does not display the vendor Remit To address, so verifying the vendor address is not a natural step in the invoice entry process.  Yes, there is a link to open the Vendor Address Maintenance window, but what if internal controls prevent the user who enters vendor invoices from editing vendor addresses?  The user would need to go through a separate process to verify the vendor address...for every invoice.  Not ideal.

How can VBA help?

In just a few minutes, VBA can be added to the Payables Transaction Entry window to check if the vendor has not had a transaction in over 60 days, prompt the user to verify the vendor address, and even open the Vendor Inquiry window to review the current Remit To address in Dynamics GP.

It's really easy!

Here's a video discussing the background and walking through the entire process of adding the VBA to Dynamics GP.




First, within Dynamics GP, you add the desired windows to Visual Basic by clicking on Tools -> Customize -> Add Current Window to Visual Basic.


After the window is added, I click on Add Fields to Visual Basic and then click on the Vendor ID field.

Since I am assuming that the user entering invoices will not have access to edit vendors or vendor addresses, I'm going to add the Vendor Inquiry window to Visual Basic and add 3 fields on the Inquiry window to VB:  Vendor ID, Address ID, and the Address ID Next button.


Once I have those windows and fields added to Visual Basic, I press CTRL+F11 to open the Dynamics GP Visual Basic Editor.  If you don't have access to the VB Editor, you may not be licensed to use it, or you may not have permissions--in which case, talk with your GP administrator or GP partner.


In the VB Editor window, I'll select the PayablesTransactionEntry window on the left, then select the VendorID field and the AfterUserChanged event.

I wrote the simple VBA code below to demonstrate how quickly and easily you can add valuable functionality to Dynamics GP that saves users time and improves data entry.

The code finds the most recent document date for any vendor transaction in Dynamics GP, and if that date is over 60 days ago, it opens the Vendor Inquiry window and displays the vendor Remit To address for the user to review and verify.


Private Sub VendorID_AfterUserChanged()
       
    Dim strVendorID As String
    Dim strSQL As String
    Dim strLastDocDate As String
    Dim strRemitID As String
    Dim dtLastDocDate As Date
    Dim intDays As Long
    Dim msgResult As VbMsgBoxResult
    Dim oConn As Object
    Dim rsResult As Object
   
    strVendorID = VendorID.Value
   
    'Find the most recent document date for the vendor
    strSQL = "SELECT COALESCE(MAX(DOCDATE), '1900-01-01') AS DOCDATE FROM PM00400 WHERE VENDORID = '" & strVendorID & "'"
   
    Set oConn = UserInfoGet.CreateADOConnection
    oConn.DefaultDatabase = UserInfoGet.IntercompanyID
    Set rsResult = oConn.Execute(strSQL)
   
    strLastDocDate = Trim(rsResult.Fields("DOCDATE").Value)
   
    rsResult.Close
   
    'Get the Remit To Address ID for the vendor
    strSQL = "SELECT VADCDTRO FROM PM00200 WHERE VENDORID = '" & strVendorID & "'"
    Set rsResult = oConn.Execute(strSQL)
   
    strRemitID = Trim(rsResult.Fields("VADCDTRO").Value)
   
    rsResult.Close
    oConn.Close
       
    dtLastDocDate = CDate(strLastDocDate)
   
    'If Doc Date is 1/1/1900, vendor has no transactions
    If dtLastDocDate = CDate("1900-01-01") Then
        Exit Sub
    Else
       
        intDays = DateDiff("d", dtLastDocDate, DateTime.Date)
       
        'If the last doc date is > X days ago, display a dialog
        If intDays > 60 Then
       
            msgResult = MsgBox("This vendor has not had a transaction since " & strLastDocDate & " (" & intDays & " days ago)." & vbNewLine & vbNewLine & "Please review the current vendor Remit To address and compare to the invoice address", vbOKOnly, "Verify Vendor Address")
            VendorInquiry.Open
            VendorInquiry.VendorID.Value = strVendorID
            VendorInquiry.Activate
            VendorInquiry.Show
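            'Page through the Address IDs in the Vendor Inquiry window until the Remit To address is displayed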
           
            While VendorInquiry.AddressID.Value <> strRemitID
                VendorInquiry.NextButtonWindowArea.Value = 1
            Wend
       
        End If
    End If


End Sub


In just a few minutes, you can have this customization running in Dynamics GP without any additional development tools.

If you have more complex requirements, you can easily add more advanced functionality using VBA.  If you prefer using a separate development tool, you could also develop this customization using .NET or Dexterity, but the appeal of VBA is its simplicity and ease of use.

So if you have some small problem or additional business requirement that you'd like to handle in Dynamics GP, VBA might come in handy.



Steve Endow is a Microsoft MVP in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Twitter, YouTube, and Google+





T-SQL: MAX vs. TOP 1 - Which is better??

By Steve Endow

If you need to get the largest value for a field, should you use MAX in your query?  Or should you use TOP 1 with ORDER BY?



Which is better?  Which is faster?  Is that always true?

Do you think you know the answer?

Place your bets, and then check out my video below, where I compare MAX vs TOP 1 on several Dynamics GP tables.

The results may surprise you!



Did I miss anything or make any mistakes in my testing?  Are there other considerations when choosing between MAX vs. TOP 1?



Here are the queries that I used in my testing.  Note that your results will vary depending on how much data you have in your tables and your SQL Server version.


--MAX vs TOP 1 with ORDER BY

SET STATISTICS IO ON

SELECT MAX(DOCDATE) AS DOCDATE FROM PM30200 WHERE VENDORID = 'ACETRAVE0001'

SELECT TOP 1 DOCDATE FROM PM30200 WHERE VENDORID = 'ACETRAVE0001' ORDER BY DOCDATE DESC    

SET STATISTICS IO OFF


SET STATISTICS IO ON;

WITH cteMaxDate (DOCDATE) AS
(
       SELECT MAX(DOCDATE) FROM PM10000 WHERE VENDORID = 'ACETRAVE0001'
       UNION 
       SELECT MAX(DOCDATE) FROM PM20000 WHERE VENDORID = 'ACETRAVE0001'
       UNION
       SELECT MAX(DOCDATE) FROM PM30200 WHERE VENDORID = 'ACETRAVE0001'
)
SELECT MAX(DOCDATE) AS DOCDATE FROM cteMaxDate;

WITH cteMaxDate2 (DOCDATE) AS
(
       SELECT TOP 1 DOCDATE FROM PM10000 WHERE VENDORID = 'ACETRAVE0001' ORDER BY DOCDATE DESC
       UNION 
       SELECT TOP 1 DOCDATE FROM PM20000 WHERE VENDORID = 'ACETRAVE0001' ORDER BY DOCDATE DESC
       UNION
       SELECT TOP 1 DOCDATE FROM PM30200 WHERE VENDORID = 'ACETRAVE0001' ORDER BY DOCDATE DESC
)

SELECT MAX(DOCDATE) AS DOCDATE FROM cteMaxDate2;

SET STATISTICS IO OFF;



SELECT COUNT(*) FROM SEE30303  --73,069 records

SELECT TOP 10 * FROM SEE30303


SET STATISTICS IO ON;

SELECT MAX(DATE1) AS DATE1 FROM SEE30303 WHERE ITEMNMBR IN ('ARM', 'FTRUB', 'A100', '24X IDE')

SELECT TOP 1 DATE1 FROM SEE30303 WHERE ITEMNMBR IN ('ARM', 'FTRUB', 'A100', '24X IDE') ORDER BY DATE1 DESC

SET STATISTICS IO OFF;



SET STATISTICS IO ON;

SELECT MAX(DATE1) AS DATE1 FROM SEE30303

SELECT TOP 1 DATE1 FROM SEE30303 ORDER BY DATE1 DESC

SET STATISTICS IO OFF;



SET STATISTICS IO ON;

SELECT MAX(DATE1) AS DATE1 FROM SEE30303 OPTION (MAXDOP 1)

SELECT TOP 1 DATE1 FROM SEE30303 ORDER BY DATE1 DESC OPTION (MAXDOP 1)

SET STATISTICS IO OFF;



SELECT COUNT(*) AS Rows FROM IV30500
SELECT TOP 100 * FROM IV30500

SET STATISTICS IO ON;

SELECT MAX(POSTEDDT) AS POSTEDDT FROM IV30500 --OPTION (MAXDOP 1)

SELECT TOP 1 POSTEDDT FROM IV30500 ORDER BY POSTEDDT DESC --OPTION (MAXDOP 1)

SET STATISTICS IO OFF;



SET STATISTICS IO ON;

SELECT MAX(POSTEDDT) AS POSTEDDT FROM IV30500 WHERE ITEMNMBR IN ('ARM', 'FTRUB', '100XLG') AND POSTEDDT BETWEEN '2017-01-01' AND '2017-12-31' --OPTION (MAXDOP 1)

SELECT TOP 1 POSTEDDT FROM IV30500 WHERE ITEMNMBR IN ('ARM', 'FTRUB', '100XLG') AND POSTEDDT BETWEEN '2017-01-01' AND '2017-12-31' ORDER BY POSTEDDT DESC --OPTION (MAXDOP 1)

SET STATISTICS IO OFF;



SET STATISTICS IO ON;

SELECT MAX(TRXSORCE) AS POSTEDDT FROM IV30500 WHERE ITEMNMBR IN ('ARM', 'FTRUB', '100XLG') 

SELECT TOP 1 TRXSORCE FROM IV30500 WHERE ITEMNMBR IN ('ARM', 'FTRUB', '100XLG') ORDER BY TRXSORCE DESC

SET STATISTICS IO OFF;



USE [TWO]
GO
CREATE NONCLUSTERED INDEX NCI_IV30500_ITEMNMBR
ON [dbo].[IV30500] ([ITEMNMBR])
INCLUDE ([TRXSORCE])
GO


USE [TWO]
GO
DROP INDEX IV30500.NCI_IV30500_ITEMNMBR
GO




Steve Endow is a Microsoft MVP in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Twitter, YouTube, and Google+





I give away my source code to my customers

By Steve Endow

Dynamics GP partners and customers often hire me to develop custom Dynamics GP integrations, GP customizations, Visual Studio Tools AddIns for GP, or even custom web APIs for Dynamics GP.

I develop the solution, usually using .NET, then I prepare a deployment guide and deployment package that can be installed on the customer's servers.  The solution is tested, I fix some bugs and refine the solution, I prepare new deployments that get installed, and once everything looks good, the customer goes live.

Everybody's happy, and I'm all done, right?

Except for one critical piece.

Any guesses?

What about the source code?

"What about the source code?", you might reply.

I diligently check in my source code to Git, and the code is pushed to an online Git repository for safe keeping and accessibility.  And I have backups of backups of backups, both on site and off site.  Great.  So I'm all done, right?

Not really.

What if I disappear?  What if I win the lottery?  What if I decide that this whole modern civilization thing is overrated and go live off the grid?



In reality, I usually work with customers for years.  I've worked with customers as they've upgraded from GP 10 to 2010 to 2013 to 2016.  I've worked with customers where most of the accounting department and IT staff have changed.  I've worked with customers as they cycle through multiple Dynamics GP partners.  I've worked with customers that have been acquired, gone out of business, and migrated to other ERP systems.

I'm not planning on disappearing, but I've seen what happens when a Dynamics GP developer does disappear.  One guy just vanished and the customer couldn't locate him. More often, developers leave a Dynamics GP partner and nobody else at the partner knows where to find the customer's code.

And then there's the case where customers switch GP partners, and 2 years after the switch they realize that they don't have the source code for a GP customization or integration.  This happens all the time.

So, I give my source code to my customers.  If I'm working with a partner, I send the source code to the project manager after each release.  If I'm working with a customer, I usually send the source code to someone in IT.

"But gasp!  It's YOUR intellectual property!  You legally own the code!  Why would you give away your source code!", someone might ask.

My customers don't hire me to type funny words and inscrutable symbols into a development tool.  They aren't paying me to accumulate priceless intellectual property (that actually has no commercial value).  And they definitely aren't hiring me so that I can hold them hostage when they upgrade Dynamics GP or switch partners and need an updated version of their integration or customization.

I like to think that my customers hire me because I provide them with value, and the value I provide isn't a Git repository or zip file containing source code.  Sure, the code has value, but there are far better developers out there they could hire if all they needed was a coder.

Can you guess what happens when I give away my code to my customers?  When I send off that email with a link to download my precious source code?

Absolutely. Nothing.

Nothing happens.  It's completely uneventful.

Customers don't hire a cheaper developer.  Customers don't stop hiring me.  Customers don't try and maintain the code themselves so that they don't have to pay me.  Sometimes they consider supporting the code, but then they realize they don't want to maintain another project.  The last thing an overworked IT department wants is to inherit someone else's code that involves the ERP system and debits and credits--no joke.

But can you guess what happens over time?

Any questions about "code ownership" disappear.  Concerns over access to source code disappear.  Tensions around risk and critical dependencies dissolve.  Partners and customers feel more comfortable.  I think I have better relationships with my customers as a result.

And take a guess what happens when a partner or customer encounters a situation where another developer won't provide the source code.  They notice.  They really notice.

"Wait a minute. Steve sends me his source code regularly and it's no big deal. But this other developer refuses to send me his source code."

When this happens, the partner is not happy and the customer feels exposed or at risk. It changes the relationship.

So strangely, giving away my code has become a small competitive advantage.  Tentative prospects sometimes ask me, "Steve, will this project include delivery of source code?"  Absolutely!  Once I deliver the code for the first release, concerns disappear.

Yes, I'm sure there are caveats for some organizations or some situations, and lawyers will gladly argue about IP and contracts and liability and blah blah blah for $500 an hour, but in the real world, the mid-market companies I work with are just looking to get things done.  The last thing anyone wants is to involve a lawyer, and nobody wants to worry or even care about source code.  I'm not building the next Azure or Axapta or VC backed killer app.

So when I hear stories about a GP partner who refuses to give source code to a customer who has switched to a new partner, I seriously wonder, "What in the world are they thinking?"

In the meantime, I'm working on making my customers happy.

It seems to be working.





Steve Endow is a Microsoft MVP in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Twitter, YouTube, and Google+




Sample Dynamics GP eConnect XML for RM Apply (RMApplyType / taRMApply)

By Steve Endow

A customer asked for sample XML for the RMApplyType / taRMApply eConnect transaction type.  I couldn't find one handy during a search, so I had to cobble together some .NET code and generate the XML.

I'm wondering if there is an easier way to generate the sample eConnect XML.  In theory, eConnect Requester with the eConnect Outgoing Service can send certain XML documents to MSMQ, but that is a hassle to set up properly, and I don't know that all transaction types are supported by eConnect Requester--such as RMApplyType.


So, here is a sample Dynamics GP eConnect XML document for RM Apply (RMApplyType / taRMApply)


<?xml version="1.0"?>
<eConnect xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
            <RMApplyType>
                        <eConnectProcessInfo xsi:nil="true" />
                        <taRequesterTrxDisabler_Items xsi:nil="true" />
                        <taAnalyticsDistribution_Items xsi:nil="true" />
                        <taRMApply>
                                    <APTODCNM>SALES100001</APTODCNM>
                                    <APFRDCNM>PYMNT100001</APFRDCNM>
                                    <APPTOAMT>123.45</APPTOAMT>
                                    <APFRDCTY>9</APFRDCTY>
                                    <APTODCTY>1</APTODCTY>
                                    <DISTKNAM>0</DISTKNAM>
                                    <APPLYDATE>2017-04-12</APPLYDATE>
                                    <GLPOSTDT>2017-04-12</GLPOSTDT>
                        </taRMApply>
            </RMApplyType>
</eConnect>


Discount Taken, Apply Date, and GL Post Date are optional, and do not have to be assigned or included in the XML if they are not needed.

Here is the eConnect documentation on taRMApply.


"Can you add just one more little feature?": A Story About Software and Home Improvement

By Steve Endow


Wife: "Steve, can you install an exhaust fan in the small bathroom?"
Steve:  "Sure, hunny, no problem. I just ordered the fan and I'll call Sam to install it."

Customer:  "Can you add this simple little feature to our application?"
Developer:  "Sure, no problem.  I'll get right on that."

No big deal.


Sam: "Steve, I cut a hole in the bathroom ceiling for the fan, but something is strange. There's an extra layer of drywall on the ceiling, and it's not attached properly and it's sagging."
Steve: "Hmmm, that doesn't look right. Let's remove the extra drywall and see what the prior homeowner was covering up."



Customer: "So how's that new simple little feature coming along?"
Developer: "Well, I looked through the code, and the original developers didn't design the software to handle this feature, so it's going to require some redesign of the customization."

I think the project scope just changed.


Sam: "Steve, I think I found what the extra drywall was covering up.  It looks like the old bath tub and old toilet on the second floor that we replaced 3 years ago were leaking."
Steve: "Okay, so there is probably some old water damage?"
Sam: "Well...I think it's a little more than that. Looks like lots of mold and some termite damage."



Customer:  "So when you say that it will require some redesign, what does that mean?"
Developer:  "Well, the customization wasn't designed to handle the new functionality, the database tables don't have a field to store data for the new feature, and the user interface doesn't have space for the new feature.  Those are the items I've found so far."

Requirements are shifting...and growing...




Steve: "So how's it looking?"
Sam: "Well, I removed the entire bathroom ceiling and one wall.  There's quite a bit of termite damage, so we're going to need to replace several studs and rebuild the wall."



Customer: "So we need to modify the database tables to store the new data, modify the user interface to handle the new data entry, and write a little bit of code?"
Developer: "Well, after additional review of the old code, I think some of the code needs to be rewritten to meet your new requirements."

The magnitude is just starting to sink in.


Sam:  "It looks like the water damage extends in the back wall behind the sink and vanity. I know you were trying to keep the vanity, but we have to rip it all out."
Steve:  (audible sigh) "Um...okay..."


Customer: "So how much of the code do you need to rewrite?"
Developer:  "I think it's going to be easier to just write new code from scratch. Retrofitting the existing code will take longer and we'll have to deal with other problems and limitations."

The full scope finally emerges.


Steve: "So this morning, this started as a simple project to install a bathroom fan.  But by lunch time, it turned into a complete gut and rebuild of the entire bathroom, down to the studs."
Sam: "Ya, pretty much."


Customer: "So you're saying in order to add this one feature, we need to rewrite the entire application from scratch?"
Developer:  "Ya, pretty much."

Acceptance.



Steve Endow is a Microsoft MVP in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Twitter, YouTube, and Google+




Stop typing passwords...completely. Use a fingerprint reader and Windows Hello!

By Steve Endow

Many, many, many years ago I finally got tired of remembering all of my passwords, and started using an Excel file to track them. After a few years of that, I got tired of insecurely tracking passwords in Excel and started using RoboForm to manage my passwords.  It had a few rough edges way back then, but worked well enough and also worked on my BlackBerry (yup, it was a long time ago).  I now manage a few thousand logins and notes in RoboForm, and needless to say, it's pretty essential in my daily life.

So that's great.  But there are still a few passwords I am having to constantly type.  Every time I sit down at my desk, I have to login to Windows.  I've been doing it for so many years that it's second nature.  I don't even think twice about it--it's pure muscle memory.  Except when I mistype my password or don't realize that Caps Lock is on, and it takes me 3-4 tries.  Grrr.

The second password I am constantly typing is my RoboForm master password.  So when a web site needs a login and I tell RoboForm to handle it, RoboForm will sometimes prompt me to enter my master password if I've just unlocked my desktop or have been away for a few hours.  Again, I've been doing it for so many years that I don't even think about it.

Then came the iPhone fingerprint sensor called TouchID.  It has taken a few years to gain traction, but now I can use my fingerprint to unlock my phone, pay for my groceries, login to my banking apps, and...access the RoboForm iOS app.  It is absolutely fantastic.  Typing my long RoboForm master password on my phone was moderately painful, so being able to use TouchID to unlock RoboForm on my phone was a wonderful improvement.  Once you start using Touch ID, it becomes strange to see a password prompt on the iPhone.

Then, a few years ago, I bought a Surface Pro 4 (which I do not recommend, at all, long story).  While shopping for the Surface Pro 4, I didn't know anything about Windows Hello, and I didn't realize that the Surface Pro 4 had an infrared web cam that could be used for face recognition authentication with Windows Hello.  But when I saw that Microsoft offered a keyboard with an integrated fingerprint reader, I knew I wanted one.  I waited a few months until the keyboard with fingerprint reader was in stock before buying the SP4, and I'm glad I waited.

After a few dozen firmware updates and software fixes made the horrible SP4 minimally usable and allowed the keyboard to actually work, the fingerprint reader on the SP4 keyboard was great.  It was surprisingly fast and easy to use.  It was much faster and more reliable than the Windows Hello face recognition, so I ended up using the fingerprint reader quite a bit.

But I still kept typing in my RoboForm password on my laptop...until one day I was poking around in the RoboForm settings and I accidentally discovered that RoboForm supported fingerprint authentication!  Eureka!  I don't know when the support was added, but I wasn't about to complain.


I enabled the fingerprint support and like magic, RoboForm unlocked with a touch of my finger.  Wow.  This was YUGE.

Having suffered for a few years with the SP4, I finally gave up and bought a real laptop, a Lenovo X1 Carbon 2017, and was thrilled that it had an integrated fingerprint reader as a standard feature.  Having experienced how useful the reader was on the SP4, I was just as happy with it on the Lenovo X1.  And after installing RoboForm on the X1 Carbon, I enabled fingerprint support and was on my way.

So life was then grand in mobile-land.  My phone and laptop had seamless fingerprint authentication to login and authenticate with RoboForm.

Which made using my desktop painful.  I actually...had to... type... my... Windows... password... every... single... time...I sat down.  After being spoiled by my iPhone and my laptop, it felt like a complete anachronism to actually have to TYPE (gasp!) my password!  Barbaric!

I apparently started to get rusty and seemed to regularly mistype my password on my desktop.  I then had several cases where it took me 4 or 5 password attempts before realizing Caps Lock was on.  Ugh.  I felt like I was in the stone ages, where Minority Report style authentication didn't actually exist.  It was...unacceptable.

So I searched for desktop fingerprint readers for Windows.  And...I was underwhelmed.  I found one that looked legit, for about $100, but the reviews were very mixed, citing driver issues, and it sounded like the company had been acquired and had since disappeared.  After seeing similarly mixed reviews on other models, I gave up.

But after a few more weeks of password typing punishment, I tried again, figuring I would reconsider the small fingerprint readers that seem to be designed for laptops.  A few seemed okay, but again, mixed reviews.

After a few more searches, I found one that seemed legit and appeared to be designed for Windows Hello authentication on Windows 10.  (There are probably a few others that work well, but caveat emptor and read the reviews.)

https://www.amazon.com/gp/product/B06XG4MHFJ/


It was only $32 on Amazon and seemed to have pretty good reviews, so I gladly bought it.  I plugged it into my Windows 10 desktop, Windows automatically detected it and set it up, and then I added a fingerprint in Windows Hello.  I then enabled fingerprint support in RoboForm.

Based on my tests so far, it works great.  I can now unlock my desktop by very briefly touching the sensor with my finger.  And I no longer have to type my RoboForm master password, which is a huge, huge benefit.  Just like my iPhone and my laptop.  No more passwords.

To make it more accessible and easier to use, I plugged the fingerprint sensor into a USB extension cable and then attached that cable to the back of my keyboard with a little hot glue.  Now, whenever I need to login or enter a password, I just move my hand to the left side of my keyboard and give the sensor a quick touch.



It's quite surprising how fast it is, and it's much, much faster than typing my password.  In fact, I don't even have to press a key on my keyboard.  From the Windows lock screen, I can just touch the sensor and login.

Once I'm in Windows, when I need to unlock RoboForm, it's just a quick touch to the sensor, and it's unlocked.


If you aren't using fingerprint sensors on every device you own, I highly recommend it.  I now use fingerprints on my iPhone, iPad, laptop, and desktop and it's a huge convenience.  You don't realize what a hassle passwords are until you start using your fingerprint to authenticate.

It's taken me several years to use fingerprints on all of my devices, but I'm finally there and it's glorious.

Steve Endow is a Microsoft MVP in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Twitter, YouTube, and Google+





Beware of UTC time zone on dates when importing data into Dynamics GP!

By Steve Endow

Prior to this year, I rarely had to deal with time zones when developing integrations for Dynamics GP.

The customer was typically using GP in a US time zone, the SQL Server was on premise in that time zone, and all of their data usually related to that same time zone.  Nice and simple.

Dynamics GP then introduced the DEX_ROW_TS field to several tables, and I would regularly forget that field used a UTC timestamp.  That was relatively minor and easy to work around.

But with the increasing popularity of Software As A Service (SaaS) platforms, I'm seeing more and more data that includes UTC timestamps.  I didn't think too much about this until today, when I found an issue with how a SaaS platform provided transaction dates in their export files.

Here is a sample date value from a file that contains AP Invoices:

    2017-09-05T14:26:05Z

This is a typical date time value, provided in what I generically call "Zulu time" format.  Apparently this format is defined in ISO 8601.

The format includes date and time, separated by the letter T, with a Z at the end, indicating that the time is based on the UTC time zone.

So why do we care?

Until today, I didn't think much of it, as my C# .NET code converts the full date time string to a DateTime value based on the local time zone, something like this:

string docDate = header["invoice-date"].ToString().Trim();
DateTime invoiceDate;
// Note: DateTime.TryParse converts a "Z" (UTC) value to the machine's local time zone by default
success = DateTime.TryParse(docDate, out invoiceDate);
if (!success)
{
    Log.Write("Failed to parse date for invoice " + docNumber + ": " + docDate, true);
}

This seemed to work fine.

But after a few weeks of using this integration, the customer noticed that a few invoices appeared to have the incorrect date.  So an 8/1/2017 invoice would be dated 7/31/2017.  Weird.

Looking at the data this morning, I noticed this in the SaaS data file for the Invoice Date field:

2017-08-25T06:00:00Z
2017-08-21T06:00:00Z
2017-08-23T06:00:00Z


Do you see the problem?

The SaaS vendor is taking the invoice date that the user in Colorado enters, and is simply appending "T06:00:00Z" to the end of all of the invoice dates.

Why is that a problem?

Well, when a user in Colorado enters an invoice dated 8/25/2017, they want the invoice date to be 8/25/2017 (UTC-7 time zone).  When the SaaS vendor adds an arbitrary time stamp of 6am UTC time, my GP integration will dutifully convert that date into 8/24/2017 11pm Colorado time.

For invoices dated 8/25, that may not matter too much, but if the invoice is dated 9/1/2017, the date will get converted to 8/31/2017 and post to the wrong fiscal period.

To make things even more fun, I found that the SaaS vendor is also storing other dates in local time.

2017-09-05T08:24:36-07:00
2017-09-05T08:26:22-07:00
2017-09-05T08:28:13-07:00


So I have to be careful about which dates should be converted from UTC to local time, which should have the time truncated so that only the date is used, and which are already in local time.  In theory, the .NET date parsing should handle the conversion properly, assuming the time zone in the data is correct, but I now know that I have to keep an eye on the vendor's data.
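For dates like the invoice date, where the calendar date that the user entered is what matters, one option is to parse the value as a DateTimeOffset and keep the date exactly as it was written, rather than converting it to local time first.  Here is a minimal sketch of that approach (just an illustration with made-up class and method names, not the production integration code):

using System;
using System.Globalization;

class VendorDateSketch
{
    static DateTime? ParseVendorDate(string raw)
    {
        // Parse the ISO 8601 value without shifting it into the local time zone
        if (DateTimeOffset.TryParse(raw, CultureInfo.InvariantCulture,
            DateTimeStyles.AssumeUniversal, out DateTimeOffset parsed))
        {
            // Keep the date component as written in the file, ignoring the time and offset
            return parsed.Date;
        }
        return null;
    }

    static void Main()
    {
        Console.WriteLine(ParseVendorDate("2017-08-25T06:00:00Z"));       // date stays 8/25/2017
        Console.WriteLine(ParseVendorDate("2017-09-05T08:24:36-07:00"));  // date stays 9/5/2017
    }
}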

I will be contacting the vendor to have them fix the issue with the invoice dates--there is no good reason why they should be appending "T06:00:00Z" to dates.

Expect to see a lot more of this date format and related date issues as more customers adopt cloud-based solutions and services.



You can also find him on Twitter, YouTube, and Google+




SQL MAX vs TOP 1: Part 2! The Revenge of IV30500!

By Steve Endow

I just can't let it go.  I need to know.  I need answers.  I need to solve the mystery.  The riddle.  The enigma.

Why does the MAX function sometimes perform very poorly compared to TOP 1?

I really thought that "MAX vs TOP 1" was a simple question.  An easy question.  A spend-10-seconds-on-Stack-Overflow-and-get-the-answer type of question.

But I just couldn't leave it alone and had to go and test it for myself.  And open a veritable Pandora's Box of SQL riddles.

In Part 1 of this series, I delved into how MAX and TOP 1 behave in several random queries, and I ended on a query that showed MAX performing quite poorly compared to TOP 1.

After that video, I stumbled across an even simpler query that produced an even more dramatic performance difference, where MAX performed miserably.  But I couldn't figure out why.

In this video, I discuss what I learned about the query and the specific Index Scan that is causing the MAX query to perform so poorly.


Here's the recap:

When querying some fields, such as the IV30500 POSTEDDT, with no WHERE clause, both MAX and TOP 1 perform virtually the same, with both using a very efficient Index Scan.  50% vs 50% relative costs.
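For reference, these are roughly the query shapes being compared (reconstructed here; the TOP 1 version assumes an ORDER BY ... DESC so that it returns the latest value):

-- MAX vs TOP 1 on IV30500.POSTEDDT, with no WHERE clause (same shape applies to TRXSORCE below)
SELECT MAX(POSTEDDT) FROM IV30500;

SELECT TOP (1) POSTEDDT
FROM IV30500
ORDER BY POSTEDDT DESC;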



But when I query the TRXSORCE field, with no WHERE clause, MAX shows 100% relative cost, whereas TOP 1 shows 0% relative cost.



What???

The POSTEDDT query uses an Index Scan.  The TRXSORCE query uses an Index Scan.  But for some reason with the TRXSORCE query, MAX is much more costly.

I stared at this result for a few hours trying to figure out why.  I eventually found this tiny little detail.


Notice that the Actual Number of Rows for the MAX Index Scan is 147,316.  That's every row in the table.

By contrast, the TOP 1 Index Scan has Actual Number of Rows = 1.

What is going on?

For some reason, the MAX is having to scan the ENTIRE AK1IV30500 index.  It isn't getting much benefit from the index.

But why?

Unfortunately, I don't yet know.

"Maybe SQL is caching the query execution plan?"

Apparently not.  I tried DBCC FREEPROCCACHE, and saw no change.


"Maybe your statistics are stale and need to be updated?"

Nope.  I tried UPDATE STATISTICS WITH FULLSCAN.  No change.


"C'mon Steve, clearly you need to re-index!"

Did that.  I tried DBREINDEX on the specific AK1IV30500 index, as well as the entire IV30500 table.  No change in the execution plan.


I didn't find any standard maintenance task that changed the behavior of the MAX Index Scan.

As a last resort, I used the Import/Export wizard to export all of the data from the IV30500 table into a new table that I called IV30500REBUILD.  I then ran scripts to create all of the same indexes on the REBUILD table, making it identical to the original IV30500 table.

I then ran the MAX and TOP 1 queries on the new REBUILD table.

And like magic, the MAX Index Scan returned just one row.


Same table structure.

Same data.

Same indexes.

But the Index Scan on the new REBUILD table behaves properly.

So there is apparently something about my IV30500 table that is causing this problem, and rebuilding the entire table resolves it.  But rebuilding a table isn't exactly a typical SQL maintenance task, so it's not really a solution.

But this is way past my SQL skill level, so I don't yet know what conventional maintenance task might be able to achieve the same results.

I've asked for help from a true SQL expert, so I'm hoping that she will assist and help me figure out this mystery.




You can also find him on Twitter, YouTube, and Google+





Beware of MIN, MAX, and TOP in Dynamics GP SQL queries!

By Steve Endow

A few weeks ago I started some research to compare the performance of MAX vs. TOP(1) in SQL Server queries.

After finding some unexpected results, I created a second video showing some odd behavior of MAX and TOP on one particular Dynamics GP table.  At that time, I couldn't figure out what was causing the performance issue with the MAX function.

Well, thanks to some very generous help and amazing insight from Kendra Little, I finally have a definitive explanation for the performance issue.




After looking at the IV30500 table and sample queries using MAX and TOP 1, Kendra quickly noticed that the table had ANSI NULLs turned off.  I explained some of the history of Dynamics GP and its older database design quirks, and she pondered the performance issue further.

The next morning, she had found the issue.  She sent me this query to check the ANSI_PADDING settings on the char fields in the IV30500 table.


-- system_type_id 175 = char columns
SELECT OBJECT_NAME(object_id) AS TableName, name AS ColumnName, is_ansi_padded, *
FROM sys.columns
WHERE object_id = OBJECT_ID('IV30500') AND system_type_id = 175
GO



The query shows that is_ansi_padded = 0 for the char columns in IV30500.  But if I run the same query on the IV30500REBUILD table, which was created by SQL Server when I exported the data out of IV30500, I get a different result: those columns show is_ansi_padded = 1.

SELECT OBJECT_NAME(object_id) AS TableName, name AS ColumnName, is_ansi_padded, *
FROM sys.columns
WHERE object_id = OBJECT_ID('IV30500REBUILD') AND system_type_id = 175
GO


So why does this matter?

Well, to understand that, you have to learn a bit more about the ANSI_PADDING setting in SQL Server and how that affects SQL queries.  I still don't fully understand the details, but here are some references in case you want to learn more:

https://docs.microsoft.com/en-us/sql/t-sql/statements/set-ansi-padding-transact-sql

https://support.microsoft.com/en-us/help/316626/inf-how-sql-server-compares-strings-with-trailing-spaces

https://technet.microsoft.com/en-us/library/ms187403(v=sql.105).aspx


The key sentence from the third link is:
When ANSI_PADDING set to OFF, queries that involve MIN, MAX, or TOP on character columns might be slower than in SQL Server 2000.

Apparently, when ANSI_PADDING is off for a column, SQL Server must sometimes pad spaces onto the end of every single row value before it can perform a comparison.  As a result, it has to scan every row in the table, or in an index, before it can fulfill the query.

This is what I was seeing with the MAX function on the TRXSORCE field of IV30500.  Every single record was being returned by the Index Scan.

The TOP 1 operator apparently does not have to perform this same padding operation, so it is able to simply retrieve one row from the index and return it.

This is a pretty big deal.

While I'm guessing that MIN and MAX aren't widely used in Dynamics GP queries, there are certainly some situations where they would be useful.  If they are used on a table with a large number of rows, the performance hit will be significant.

In some cases, using TOP 1 may help, but as the key sentence above states, even the TOP operator may trigger the same performance issue in some queries.


If you are using MIN, MAX, or TOP and see an Index Scan in your execution plan that returns every row in the table, that may be a sign that you are encountering this issue.
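If you want to see how widespread the setting is in a company database, a variation of Kendra's query will list every char and varchar column that was created with ANSI_PADDING off:

-- List char (175) and varchar (167) columns created with ANSI_PADDING off
SELECT OBJECT_NAME(object_id) AS TableName, name AS ColumnName, system_type_id, is_ansi_padded
FROM sys.columns
WHERE is_ansi_padded = 0 AND system_type_id IN (175, 167)
ORDER BY TableName, ColumnName;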

In the video above, I show one way to modify a specific field to turn ANSI_PADDING on, but I don't know that I would recommend this in a production Dynamics GP environment.  It may work fine, but you'll have to be careful to perform the update after every Dynamics GP release or service pack, as any tables re-scripted by Dynamics GP will likely revert back to having ANSI_PADDING off.

And that, mercifully, finally solves the riddle of the poor performance of MAX in Dynamics GP.


Steve Endow is a Microsoft MVP in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Twitter, YouTube, and Google+






My Typical Password: Is a 25 character minimum passphrase policy possible?

By Steve Endow

If you haven't read my prior post about passwords, perhaps read that first:

https://dynamicsgpland.blogspot.com/2016/10/how-do-you-choose-your-passwords-and.html


My "Passphrase Generator" has been working great since then. It isn't perfect, but has been working well enough for me.

I thought I was doing all the "right" things by using my passphrase generator and using a password manager religiously.

Using words with a max length of 7 characters, 2 numbers, and 1 symbol, I have been generating passphrases like:

Briony%4Cobwebs4    (16 chars)
Hyped/5Umber1    (13 chars)
Reecho%6Touzled8    (16 chars)
Tisanes#4Tangles6    (17 chars)


I considered these pretty safe passwords.
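A minimal sketch of a generator for this format might look something like the code below.  This is not the actual Passphrase Generator from the earlier post, just an illustration; it assumes a plain text word list with one word per line, and the class and file names are made up:

using System;
using System.IO;
using System.Linq;
using System.Security.Cryptography;

class PassphraseSketch
{
    // Symbols chosen to match the format described above
    static readonly char[] Symbols = { '%', '/', '#', '!', '&' };

    static int RandomIndex(int max)
    {
        // Cryptographically random index (RandomNumberGenerator.GetInt32 requires .NET Core 3.0 or later)
        return RandomNumberGenerator.GetInt32(max);
    }

    static string Capitalize(string word)
    {
        return char.ToUpper(word[0]) + word.Substring(1);
    }

    static void Main()
    {
        // Keep words at 7 characters or less, per the format described above
        string[] words = File.ReadAllLines("wordlist.txt")
                             .Where(w => w.Length > 0 && w.Length <= 7)
                             .ToArray();

        string word1 = Capitalize(words[RandomIndex(words.Length)]);
        string word2 = Capitalize(words[RandomIndex(words.Length)]);
        char symbol = Symbols[RandomIndex(Symbols.Length)];
        int digit1 = RandomIndex(10);
        int digit2 = RandomIndex(10);

        // Example output: Briony%4Cobwebs4
        Console.WriteLine($"{word1}{symbol}{digit1}{word2}{digit2}");
    }
}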

But I recently started listening to Kevin Mitnick's book, The Art of Invisibility, on Audible. In that book, Mitnick recommends that people now use passphrases of at least 25 characters.

25 characters?!?!?  (interrobang)

That's crazy!

But is it?

Those of us who work with Dynamics GP regularly bemoan its 15 character password limit, as many of our customers have run into it.  A customer's IT security policy will require a minimum of 15 characters, and users eventually figure out that their 16+ character passwords don't work in Dynamics GP.


So obviously that rules out using 25+ character passwords for Dynamics GP.

But I'm pretty sure I've run into web sites that would not allow me to have anywhere near 25 characters.

There's only one way to find out, so I reset my passwords on a few web sites.  The numbers below aren't the sites' maximum limits, just the lengths of the long passwords that I randomly generated for each site and successfully saved.

Twitter:  35 characters
Stack Overflow:  35 characters
GPUG.com:  38 characters
Atlassian/Bitbucket:  36 characters

Wow, moving right along!  It looks like a 25 character minimum password might be possible!

(play record scratch here)

Then I log in to my online banking web site.  Major bank.  Big bank.  Huge bank.  Not a relatively tiny web site like GPUG.

And what do I see?


20 character max!  What??

Strike one!

Hmmm, let's check another bank web site.  I log in to a smaller bank that I use, but when I try to change the password...and...

...it doesn't allow me to paste in the password from my Passphrase Generator!

That is garbage!

Troy Hunt lays out this entire stupid fake "security" policy of not allowing password pasting in his excellent blog post here.

And I see that he shows examples of GE Capital and PayPal and others.

So that pretty much kills the idea of consistently using 25+ character passwords.

Could I use really long passwords on sites that allow them, and that allow pasting?  Sure.  And I may start doing that.

But clearly there are many sites, particularly the large ones, that have indefensible password length limitations and block the paste function.  So for those, you're limited to their arbitrarily short password lengths.

So I guess that answers my question.

With that said, will I use 30+ character passwords?  Not sure.

Occasionally I have to manually enter the password on a mobile device, and it is a nightmare to try and type that many characters in a password field.  I can barely compose a simple text message on my phone without making a typo, so my password typing success rate is not stellar.

But I may give it a try.  As I reset my passwords going forward, I'll try and use a 25+ character passphrase and see how it goes.

Hopefully some day my bank will allow more than 20 characters.




You can also find him on Twitter, YouTube, and Google+


My First SQL Saturday event: It was amazing

By Steve Endow

The Microsoft SQL Server community is amazing.

Amazing.

That's not an exaggeration or platitude.

On Saturday, I attended my first "SQL Saturday" event in Orange County, California.  I left speechless.


Several hundred people attended the event at a local college.  On a Saturday.  I overheard one attendee say that she woke up at 4am and had a 2+ hour drive from San Diego to attend. Presenters flew in from all over the country to speak at the event, with several speakers facing a snow storm and flight cancellations trying to return home.  They did this, without compensation, on a Saturday.  And some were planning on attending up to 10 other SQL Saturday events across the country.

And I should mention that the event was free for attendees.  Completely free.

When I arrived at 8am, there was a line of 40 or 50 people waiting to check in.  There were lots of volunteers helping people check in, handing out tote bags, re-printing passes, setting up tables, and preparing the event.  Before the first session started, they had set up tables with gallons of free coffee, bagels, danishes, and donuts.

The event is organized by PASS, a non-profit organization that helps support people who use Microsoft data technologies.

Ten companies sponsored the SQL Saturday event, which has the following mission statement:

Our Mission
The PASS SQLSaturday program provides the tools and knowledge needed for groups and event leaders to organize and host a free day of training for SQL Server professionals. At the local event level, SQLSaturday events:
  • Encourage increased membership for the local user group
  • Provide local SQL Server professionals with excellent training and networking opportunities
  • Help develop, grow, and encourage new speakers

When I signed up, I didn't know what to expect.  I thought it might be a casual user-group style meeting with a few speakers.  But it was much more like a full-fledged, single day, intense SQL Server conference.

Several of the speakers that I saw were simply amazing.

Here are the sessions that I attended:

  1. SQL Database and Query Design for Application Developers
  2. Azure Basics for the DBA
  3. PowerShell for the SQL DBA
  4. Spotlight on SQL Server by Quest Software (vendor presentation)
  5. Data Pages, Allocation Units, IAM Chains
  6. The Query Store and SQL Tuning
  7. Fundamentals That Will Improve Query Performance


The 6 educational sessions were incredible.  I felt I knew a fair amount about 3 of the topics, but still learned a ton in those sessions.  And the 3 sessions with topics that were new to me had so much valuable content that I was dizzy by the time the session ended.  For example, I learned how the data is structured inside of an 8K data page--down to the byte!  WHAT?!?!

I took pages of notes on my iPad during most of the sessions, as they were all offering real world knowledge, experience, anecdotes, and lessons about how to use different SQL Server features and tools.

It was 6 solid hours of high quality content presented by SQL Server experts.  It was intense, valuable learning, and I was tired at the end of the day.

It was amazing.

If you work with SQL Server and have an opportunity to attend a SQL Saturday event, I recommend it.



You can also find him on Twitter, YouTube, and Google+




The Challenge of Posting Dates with Automated Dynamics GP Imports

By Steve Endow

If you are familiar with Dynamics GP, you are likely familiar with the confusion that can be caused by the "Posting Date" feature.  Many customers have never even opened the date expansion window in GP that reveals the transaction's Posting Date.


A customer calls and asks, "Why did my April invoice post to the GL in March? The invoice is clearly dated April 5!"

In addition to the confusion between Document Date and Posting Date, there is also the potential confusion caused by Transaction Posting Date vs. Batch Posting Date.


As if that isn't enough fun related to dates, things can get particularly interesting and challenging with automated integrations with Dynamics GP.

The issue typically comes up during month end.  If an April 15 invoice is posted to April 16, it is usually not an issue and nobody notices.  But if a March 31 invoice is posted to April 1, that can cause issues.

When a user is entering transactions manually in GP, they can review the invoice, know whether it should be posted to March or April, and set the posting date accordingly.  But when an automated integration is importing data, it usually doesn't know which fiscal period a transaction belongs to.  It has to rely upon a data field in the source data to tell it what the posting date should be.

That sounds easy enough, right?

Unfortunately, it isn't always easy.


Above is some sample data from an integration.  A single invoice date is provided in the DOCDATE column.  And a Batch ID of 2018-04-20 is provided, implying that the transactions are related to April 20.  From this information, you could reasonably assume that the transactions should post to the 2018-04 fiscal period.

But what about this sample data?


This morning a concerned and upset customer called me asking "Why did our April invoices post to March??"

The batch ID of "20180401" indicated that these were April invoices and not March invoices.  But as we know, Dynamics GP doesn't care about the batch ID when it comes to posting.  The only date that matters is the Posting Date.

"But we don't import a posting date with our invoices. Only the Invoice (document) date!", the customer responded.

Good point.  The source data only contained DOCDATE, and their SmartConnect map was only setup to import the invoice Document Date field.

So why did all of their invoices in the 20180401 batch get posted to March 31?

Well, as I mentioned above, you have to know whether GP is configured to post using the transaction posting date or the batch posting date.  And to keep things confusing, it is possible to configure some batches to post using the transaction posting date, and have other batches post using a batch posting date.

So using the sample data above, why did the 20180401 batch post to March 31?

When importing transactions using eConnect (or SmartConnect, etc.), if the Batch ID specified for the transaction does not exist, eConnect will create the batch automatically.  You don't need to specify additional options--it will just handle it for you.

And when your Dynamics GP batch type is set to use the Batch Posting Date, guess what eConnect uses as the default value for the Batch Posting Date?  The document date.

So in the above sample data, the first invoice that is imported has a Document Date of March 31.  So eConnect dutifully creates a new batch with a posting date of March 31.  It then imports the invoices into that batch.  And all of the invoices in that batch will post to March 31.  Even if the invoice date is April 1.
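If you want to verify which posting date eConnect assigned to an auto-created batch, you can check the batch header record in the SY00500 table (the Posting Definitions Master table).  For example, using the batch ID from this scenario:

-- GLPOSTDT is the batch posting date
SELECT BACHNUMB, BCHSOURC, GLPOSTDT
FROM SY00500
WHERE BACHNUMB = '20180401';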

Okay, so the customer just needs to fix the March 31 dates, right?

Perhaps it's that simple.  Maybe there was just a bug in their source data.

But what about invoices that are generated on April 1, but related to March?  What about a vendor invoice dated April 2 that is received from an external AP system on April 3, but was for a service performed in March?  An integration won't know the invoice should be posted to March--the source data would have to provide an additional clue, such as a separate Posting Date or Fiscal Period field.

I've only encountered a few customers who were able to supply that fiscal period field separate from the document date field.  In my experience, it is not common for a source system to know the fiscal period for a transaction--most only have a single transaction date.

So when designing a transaction import for Dynamics GP, make sure to consider what happens when transactions are dated the last day of the month or first day of the month, and whether transactions related to a prior fiscal period may show up in your source data.  It can be surprisingly tricky.


Steve Endow is a Microsoft MVP in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Twitter, YouTube, and Google+







Get Next Dynamics GP RM Payment Number Using SQL

By Steve Endow

When you import transactions into Dynamics GP, you often need to get the next transaction number or document number from Dynamics GP.

In some simple cases, you can leave the document number field blank and let eConnect get the next number for you, but if you are sending in distributions or Analytical Accounting data for a transaction, you need to assign a document number to those elements before sending the transaction off to eConnect.

eConnect does have a method to generate the next number, but there's a big catch: it requires Windows authentication to connect to SQL and get the next document number.  This works for some situations where you will be using Windows Authentication for your integration, but I have many situations where only a SQL or GP login will be available.

In those cases, you can usually directly call the underlying eConnect stored procedures.  The problem with this approach is figuring out which stored procedure to call and how to call it.  You'd be surprised how challenging this can be, and every time I have to do it, I have to go find some old code because I can't seem to find the correct eConnect stored proc.

Case in point is the process for generating the next RM payment (cash receipt) number.  I looked and looked for an eConnect procedure named something like "taRM...", but couldn't find one.  Why?  Because it's inconsistently named "taGetPaymentNumber".  Ugh.
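One trick that helps is to search the company database directly for candidate procedure names.  For example:

-- Search for eConnect "next number" stored procedures by name
SELECT name
FROM sys.procedures
WHERE name LIKE 'taGet%' OR name LIKE '%Number%'
ORDER BY name;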


I couldn't find anything via Google on this, so, to document this lovely process, here is my C# code for getting the next RM Payment Number using the eConnect stored procedure.


        public static bool GetNextRMPaymentNumber(string gpDatabase, ref string nextPayment)
        {
            //eConnect method, which uses Windows auth
            //Microsoft.Dynamics.GP.eConnect.GetNextDocNumbers nextDoc = new Microsoft.Dynamics.GP.eConnect.GetNextDocNumbers();
            //string nextRMPayment = nextDoc.GetNextRMNumber(Microsoft.Dynamics.GP.eConnect.IncrementDecrement.Increment, Microsoft.Dynamics.GP.eConnect.RMPaymentType.RMPayments, ConnectionStringWindows(gpDatabase));
            //return nextRMPayment;

            //SQL method
            string commandText = "taGetPaymentNumber";

            SqlParameter[] sqlParameters = new SqlParameter[4];
            sqlParameters[0] = new SqlParameter("@I_vDOCTYPE", System.Data.SqlDbType.TinyInt);
            sqlParameters[0].Value = 9;  //9 = Payment
            sqlParameters[1] = new SqlParameter("@I_vInc_Dec", System.Data.SqlDbType.TinyInt);
            sqlParameters[1].Value = 1;  //1 = Increment
            sqlParameters[2] = new SqlParameter("@O_vDOCNumber", System.Data.SqlDbType.VarChar, 21);
            sqlParameters[2].Direction = ParameterDirection.InputOutput;
            sqlParameters[2].Value = string.Empty;
            sqlParameters[3] = new SqlParameter("@O_iErrorState", System.Data.SqlDbType.Int);
            sqlParameters[3].Direction = ParameterDirection.InputOutput;
            sqlParameters[3].Value = 0;

            // DataAccess.ExecuteNonQuery is a helper in this project that executes the stored procedure against the specified GP company database
            int recordCount = DataAccess.ExecuteNonQuery(gpDatabase, CommandType.StoredProcedure, commandText, sqlParameters);

            if (int.Parse(sqlParameters[3].Value.ToString()) == 0)
            {
                if (sqlParameters[2].Value.ToString().Trim() != string.Empty)
                {
                    nextPayment = sqlParameters[2].Value.ToString().Trim();
                    return true;
                }
                else
                {
                    nextPayment = string.Empty;
                    return false;
                }
            }
            else
            {
                return false;
            }

        }


I prefer using this method, as it will work whether I am using Windows Auth or SQL auth.
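For reference, calling the method looks something like this (the "TWO" sample company database is used here purely as an illustration):

// Example usage of the GetNextRMPaymentNumber method above
string nextPayment = string.Empty;
if (GetNextRMPaymentNumber("TWO", ref nextPayment))
{
    Console.WriteLine("Next RM payment number: " + nextPayment);
}
else
{
    Console.WriteLine("Unable to retrieve the next RM payment number.");
}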



Steve Endow is a Microsoft MVP in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Twitter, YouTube, and Google+


Consulting is never boring!

By Steve Endow

Today I had to switch between several tasks, and during one of those task switches, my brain put on the brakes.


My brain:  This is crazy!

Me:  What is crazy?

Brain:  This!  This is crazy!  Switching from SQL queries to Dynamics GP VS Tools to an ASP.NET Core web API for Dynamics GP to working with multiple Azure services.  And that's just in the last hour!  It's nuts!

Me:  Uh, hello, we do this every day. So that you're not bored, remember?

Brain:  Dude, that doesn't make it any less crazy.

Me:  Noted.  I'll blog about it just to make you feel better.


If I actually stop for a moment, step back, and look at all of the things I do, all of the tools I use, and all of the things I have to know and understand to do my job, it is kinda crazy.

If you're modest, you might think that this is fairly normal, which in some respects it is--lots of people probably do what you do in the consulting world.  But if you want to really appreciate how much you really know, try hiring a 20 year old intern and give them a few small projects.  You'll quickly realize that the "simple" task you gave the intern requires tons of fundamental knowledge that informs how to perform the task.  It probably took you years to develop that fundamental knowledge, and then many more years on top of that to develop competence or mastery.



Let's start with SQL.  In the Dynamics GP consulting world, a basic understanding of SQL Server and T-SQL is pretty much essential. 

"Hey intern, can you run this query?"

"What's a query?"

"It's a way to get data out of SQL Server."

"SQL Server?"

"Yes, SQL Server is a relational database."

"Relational?"

"Nevermind, just launch Management Studio and connect to the GP SQL instance"

"GP SQL instance?"


These steps may seem obvious, and are probably invisible to you if you've been doing this for years, but every single step requires an entire stack of fundamental skills to perform even a basic task.

So you need to know how to work with Management Studio.  How to connect to a SQL instance.  How to write some T-SQL.  It's good to understand SQL databases, tables, stored procedures, and views.  Maybe triggers and cursors if you're daring.  And how about backups and transaction logs and Recovery Model, just in case there's a problem?

If you're on the bleeding edge, you'll know how to backup SQL Server databases to Azure.  Which means you should be familiar with SQL jobs and Azure Storage and backup compression.  And Azure is an entire universe of knowledge.

But back to Dynamics GP.  How about SET files and dictionaries and chunk files and shared dictionaries and modified forms and reports and AddIns and Modifier & VBA?  And there's all the knowledge around GL, AP, AR, SOP, POP, and IV, not to mention the other ancillary modules like AA, PA, FA, MC, CM, IC, HR, UPR, and others.  You know that one checkbox under Tools -> Setup -> Posting -> Posting?  Ya, that one that affects whether transaction posting hits the GL?  Or what about that option in the SOP Type ID that affects inventory allocation and quantity overrides?  Or the hundreds of other options you kinda need to be aware of?

And naturally, since you're working with an accounting system, it's good to understand debits vs credits and income statement vs balance sheet and cash vs income vs expenses vs assets vs liabilities.  And if you're into reporting, there's the entire universe of standard reporting tools and financial reporting tools.

In my particular line of work, I also need to understand everything from Excel macros to VBA to VB Script to Integration Manager to eConnect to SmartConnect.  I need to know how to use .NET 2.5 through .NET Core 2.0 using Visual Studio 2010 through 2017.  I need to thoroughly understand IIS and Kestrel, TCP/IP, ports, firewalls, DNS, HTTPS, TLS, SSH, and nmap.  I need to know HMAC, AES, and SHA and have a fairly good understanding of encryption.

I need to be able to glance at XML and JSON and quickly find data issues.  I need to know what HTTP verbs and response codes mean, as well as what "idempotent" means (that's actually a word).  I need to understand TXT and CSV parsing and the issues related to using Excel files as data sources.  I need to be able to review thousands of entries in a log file and figure out why two identical requests were processed 3 milliseconds apart, and that's only after I figure out how to reliably log activity with millisecond precision.

I need to understand PCI compliance and how to call credit card gateway APIs for CC and ACH tokenization and transaction processing.  And then there's the TLS 1.2 upgrade saga--don't get me started on that one.

And while writing some complex queries years ago, I needed to figure out why they were taking hours to run.  So I gave myself a crash course in SQL query optimization so that I didn't kill the SQL Server.  That led to me developing a subspecialty in amateur SQL Server optimization, which can be quite challenging in the Dynamics GP world.  And if you're dealing with GP performance, it's helpful to understand virtualization and be familiar with Hyper-V and VMWare and how VM memory settings affect SQL Server.

And the list goes on and on.  It's a really, really long list of stuff you need to learn and know and understand and use on a regular basis.


It's kinda crazy.

But that's also why I like it. 




You can also find him on Twitter, YouTube, and Google+




