Channel Description:

Christina Phillips, an MCT with The Knaster Technology Group, and Steve Endow, an MCT and owner of Precipio Services, maintain this blog as a way to encourage collaboration and knowledge sharing between Dynamics GP consultants, trainers, and end-users.


    By Steve Endow

    I'm a huge fan of the Veeam Backup and Replication product.  I've used it for several years now to back up my Hyper-V virtual machines to a Synology NAS, and it has been a huge improvement over the low-tech, script-based VM backups I was suffering with previously.

    One quirk I have noticed with Veeam is that it seems to be very sensitive to any loss of connectivity with the backup infrastructure.  With a prior version, if Veeam was running but my file server was shut down, I would get error notifications indicating that it couldn't access the file server--even though backups were not scheduled to run.  I haven't noticed those messages lately, so I'm not sure if I just turned them off, or if I haven't been paying attention to them.

    Since I don't need my servers running 24x7, I have them scheduled to shut down in the evening and then automatically turn on in the morning.  But sometimes, if I wrap up my day early, I may shut down all of my servers and my desktop at, say, 8pm.  If I shut down my Synology NAS first and Veeam detects that the file server is not accessible, it may log a warning or error.

    Normally, this isn't a big deal, but I found one situation where this results in a subsequent error message.  I recently tried to restore a VM, and after I selected the VM to restore and chose a restore point, I received this error message.

    Veeam Error:  Backup files are unavailable


    When I first saw this message I was concerned there was a problem, but it didn't make sense because Veeam was obviously able to see the backup files and it even let me choose which restore point I wanted.  So I knew that the backup files were available and were accessible.

    I could access the network share on my NAS file server and browse the files without issue.  I was able to click on OK to this error message, complete the restore wizard, and successfully restore my VM.  So clearly the backup files were accessible and there wasn't really an issue.

    So why was this error occurring?

    I submitted a support case to Veeam and spoke with a support engineer who showed me how to resolve this error.  It seems that whenever Veeam is unable to access the file share used in the Backup Infrastructure settings, it sets a flag or error state indicating that the backup location is not available.  After this happens, you have to manually tell Veeam to rescan the backup infrastructure in order to clear the error.  Fortunately, this is quick and easy.

    In Veeam, click on the Backup Infrastructure button in the bottom left, then click on the Backup Repositories page.  Right-click on the Backup Repository that is giving the error and select Rescan.


    The Rescan will take several seconds to run, and when it is done, the "Backup files are unavailable" message will no longer appear when you perform a restore.  Or at least that worked for me.

    Overall, I'm incredibly pleased with Veeam Backup and Replication and would highly recommend it if it's within your budget.


    You can also find him on Twitter, YouTube, and Google+










    By Steve Endow

    I have two new projects that require web service APIs, so rather than use a tried-and-true tool that I'm familiar with to develop them, I am plunging into the dark depths of ASP.NET Core.

    If you've played with ASP.NET Core, you may have noticed that Microsoft has decided that everything you previously learned about developing web apps and web services should be discarded, making all of your prior knowledge and experience worthless.  And if you choose to venture into the new world of ASP.NET Core, you will be rewarded by not knowing how to do anything.  At all.  Awesome, can't wait!

    One of those things that you'll likely need to re-learn from scratch is logging.  ASP.NET Core has a native logging framework, so rather than write your own or use a third party logging package, you can now use a built-in logger.  This sounds good, right?

    Not so fast.  At this point, I have come to understand that nothing is easy or obvious with ASP.NET Core.

    This article provides a basic overview showing how to perform logging in ASP.NET Core.

    https://docs.microsoft.com/en-us/aspnet/core/fundamentals/logging

    One thing it doesn't clearly explain is that if you log at the Information level, your log will quickly fill with hundreds of entries from the ASP.NET Core engine / web server itself.  You will literally be unable to find your application's entries in the log file.



    So the article helpfully points out that ILoggerFactory supports filtering, allowing you to specify that you only want warnings or errors from the Microsoft tools/products, while logging Information or even Debug messages from your application.

    You just add this .WithFilter section to your startup.cs Configure method:

    loggerFactory
        .WithFilter(new FilterLoggerSettings
        {
            { "Microsoft", LogLevel.Warning },
            { "System", LogLevel.Warning },
            { "ToDoApi", LogLevel.Debug }
        })


    Cool, that looks easy enough.

    Except after I add that to my code, I see the red squigglies of doom:


    Visual Studio 2017 is indicating that it doesn't recognize FilterLoggerSettings. At all.



    Based on my experience with VS 2017 so far, it seems to have lost the ability (which existed in VS 2015) to identify missing NuGet packages.  If you already have a NuGet package installed, it can detect that you need to add a using statement to your class, but if the package isn't installed, it can't help you.  Hopefully this functionality will be added back to VS 2017 in a service pack.

    After many Google searches, I finally found this StackOverflow thread, and hidden in one of the post comments, someone helpfully notes that the WithFilter extension requires a separate NuGet package, Microsoft.Extensions.Logging.Filter.  If you didn't know that, you'd spend 15 minutes of frustration, like I did, wondering why the very simple Microsoft code sample doesn't work.

    Once you add the Microsoft.Extensions.Logging.Filter NuGet package to your project, Visual Studio will recognize both WithFilter and FilterLoggerSettings.
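    Putting the pieces together, the Configure method in Startup.cs might look something like the sketch below.  This assumes the ASP.NET Core 1.x ILoggerFactory API, and "MyApi" is a placeholder for your own application's root namespace:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.Logging;  // WithFilter and FilterLoggerSettings require
                                     // the Microsoft.Extensions.Logging.Filter package

public class Startup
{
    public void Configure(IApplicationBuilder app, ILoggerFactory loggerFactory)
    {
        loggerFactory
            .WithFilter(new FilterLoggerSettings
            {
                { "Microsoft", LogLevel.Warning },  // suppress framework chatter
                { "System", LogLevel.Warning },
                { "MyApi", LogLevel.Debug }         // placeholder: your app's namespace
            })
            .AddConsole();
    }
}
```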

    And here is my log file with an Information and Warning message, but no ASP.NET Core messages.


    And several wasted hours later, I am now able to read my log file and actually work on the real project code.

    Best of luck with ASP.NET Core.  You'll need it.








    By Steve Endow

    A very handy feature in Visual Studio is the Comment / Uncomment editing option.

    There are two buttons that allow you to comment or uncomment code with a single click.


    While those buttons are handy, they require the mouse, and that can be tedious when you are also making multiple code selections with the mouse.

    Visual Studio does have keyboard shortcuts for Comment and Uncomment, but they are the unfortunate double-shortcut combinations:  Ctrl+K, Ctrl+C to comment, and Ctrl+K, Ctrl+U to uncomment.

    I find those shortcuts to be pretty annoying, as they require me to use both hands to press those key combinations.  It's not much of a "shortcut".

    After several years of this nagging me, I finally bothered to look up a better alternative.  Fortunately, Visual Studio allows you to add your own keyboard shortcuts.  If you click on Tools -> Options and then select Environment -> Keyboard, you can select a command and assign a new keyboard shortcut.

    The one challenge is finding a decent keyboard shortcut that isn't already taken.

    I entered the word "comment" and it displayed the relevant commands.  I then selected Edit.CommentSelection, selected Use new shortcut in Text Editor, pressed Alt+C, then clicked Assign.

    Now I can comment a selection using the nice and simple Alt+C shortcut.  Big improvement.


    I don't uncomment as much, so for now I haven't assigned a custom shortcut to Edit.UncommentSelection, but at least I now know it's very easy to do.

    Keep on coding...and commenting...










    By Steve Endow

    Anyone who has written SQL queries, built integrations, or otherwise had to deal with Dynamics GP data certainly has warm feelings about the colonial-era use of the char data type for all string fields.

    This has the lovely side effect of returning string values with trailing spaces that you invariably have to deal with in your query, report, application, XML, JSON, etc.

    In the world of SQL queries, you can spot a Dynamics GP consultant a mile away by their prolific use of the RTRIM function.  .NET developers will similarly have Trim() calls thoroughly coating their data access code.
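    The fix itself is trivial in isolation--it's the scale that hurts.  A minimal illustration, using hypothetical padded values like those GP returns:

```csharp
using System;

class TrimDemo
{
    static void Main()
    {
        // Hypothetical values: GP char columns come back padded to the field width
        string itemnmbr = "100XLG   ";
        string itemdesc = "Green Phone   ";

        // TrimEnd removes only the trailing padding (the .NET equivalent of SQL RTRIM)
        Console.WriteLine("[" + itemnmbr.TrimEnd() + "]");  // prints [100XLG]
        Console.WriteLine("[" + itemdesc.TrimEnd() + "]");  // prints [Green Phone]
    }
}
```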

    But in this bold new age of Microsoft development tools, where everything you have spent years learning and mastering is thrown out the window, those very simple solutions aren't readily available.

    I am developing an ASP.NET Core web API for Dynamics GP, and being a sucker for punishment, I'm also using EF Core for data access.  In one sense, EF Core is like magic--you just create some entities, point it to your database, and presto, you've got data.  Zero SQL.  That's great and all if you have a nice, modern, clean, well-designed database that might actually use the space-age varchar data type.

    But when you're dealing with a relic like a Dynamics GP database, EF Core has some shortcomings.  It isn't really designed to speak to a prehistoric database.  Skipping past the obvious hassles, like exposing the cryptic Dynamics GP field names, one thing you'll notice is that it dutifully spits out the char field values with trailing spaces in all of their glory.

    When you convert that to JSON, you get this impolite response:

    "itemnmbr": "100XLG                         ",
    "itemdesc": "Green Phone                                                                                          ",

    "itmshnam": "Phone          "


    Yes, they're just spaces, and it's JSON--not a report output, so it's not the end of the world.  But in addition to looking like a mess, the spaces are useless, bloat the response, and may have to be trimmed by the consumer to ensure no issues on the other end.

    So I just spent a few hours trying to figure out how to deal with this.  Yes, SpaceX is able to land freaking rockets on a floating barge in the middle of the ocean, while I'm having to figure out how to get rid of trailing spaces.  Sadly, I'm not the only one--this is a common issue for many people.

    So how can we potentially deal with this?

    1. Tell EF Core to trim the trailing spaces.  As far as I can tell, this isn't possible as of June 2017 (v1.1.1).  EF Core apparently doesn't have a mechanism to call a trim function, or any function, at the field level.  It looks like even the full EF 6.1+ framework didn't support this and required custom code to handle it--code that doesn't appear to work in EF Core.

    2. Tell ASP.NET Core to trim the trailing spaces, somewhere, somehow.  There may be a way to do this in some JSON formatter option, but I couldn't find any clues as to how.  If someone has a clever way to do this, I'm all ears, and I'll buy you a round at the next GP conference.

    3. Use the Trim function in your class properties.  Ugh.  No.  This would involve using the old-school method of adding backing fields to your DTO class properties and calling Trim on every field.  This is annoying in any situation, but to even propose it with ASP.NET Core and EF Core seems like sacrilege.  And if you used scaffolding to build your classes from an existing database, this is just crazy talk.  I'm not going to add hundreds of backing fields to hundreds of string properties and add hundreds of Trim calls.  Nope.

    4. Use an extension method or a helper class.  This is what I ended up doing.  This solution may seem somewhat obvious, but in the world of ASP.NET Core and EF Core, it feels like putting wagon wheels on a Tesla.  It's one step up from adding Trim in your classes, and looping through object properties to trim every field is far from high tech.  Fortunately, it is relatively painless, requires minimal code changes, and is easy to rip out if a better method comes along.

    There are many ways to implement this, but I used the code from this post:

    https://stackoverflow.com/questions/7726714/trim-all-string-properties


    I created a new static class called TrimStrings, and I added the static extension method to the class.

        // https://stackoverflow.com/questions/7726714/trim-all-string-properties
        // Note: requires "using System.Linq;" for the Where extension method.
        public static class TrimStrings
        {
            public static TSelf TrimStringProperties<TSelf>(this TSelf input)
            {
                // Find all public string properties on the object
                var stringProperties = input.GetType().GetProperties()
                    .Where(p => p.PropertyType == typeof(string));

                // Trim each non-null string value in place
                foreach (var stringProperty in stringProperties)
                {
                    string currentValue = (string)stringProperty.GetValue(input, null);
                    if (currentValue != null)
                        stringProperty.SetValue(input, currentValue.Trim(), null);
                }
                return input;
            }
        }



    I then modified my controller to call TrimStringProperties before returning my DTO object.

        var item = _itemRepository.GetItem(Itemnmbr);

        if (item == null)
        {
            return NotFound();
        }

        var itemResult = Mapper.Map<ItemDto>(item);

        itemResult = TrimStrings.TrimStringProperties<ItemDto>(itemResult);

        return Ok(itemResult);


    And the new JSON output:

    {
      "itemnmbr": "100XLG",
      "itemdesc": "Green Phone",
      "itmshnam": "Phone",
      "itemtype": 1,
      "itmgedsc": "Phone",

    Fortunately this works, it's simple, and it's easy.  I guess that's all that I can ask for.









    By Steve Endow

    If you have a .NET integration for Dynamics GP that uses the eConnect .NET assemblies, this is a fairly common error:

    Could not load file or assembly 'Microsoft.Dynamics.GP.eConnect, Version=11.0.0.0

    Could not load file or assembly 'Microsoft.Dynamics.GP.eConnect, Version=12.0.0.0

    Could not load file or assembly 'Microsoft.Dynamics.GP.eConnect, Version=14.0.0.0


    This usually indicates that the integration was compiled with an older (or different) version of the eConnect .NET assemblies.

    Why does this happen?

    In my experience, there are two situations where you will usually see this.

    1. You upgraded Dynamics GP to a new version, but forgot to update your .NET eConnect integrations.  For instance, if you upgraded from GP 2013 to GP 2016, you would see the "Version 12" error message when you run your integration, as the integration is still trying to find the GP 2013 version of eConnect.

    2. You are working with an application or product that is available for multiple versions of GP, and the version you have installed doesn't match your GP version.


    The good news is that this is simple to resolve.  In the first case, the developer just needs to update the Visual Studio project to point to the proper version of the eConnect DLLs.  Updating the .NET project shouldn't take very long--maybe 1-4 hours to update and test, depending on the complexity of the integration.  Or, if you're using a third-party product, you just need to get the version of the integration that matches your GP version.
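    One possible stopgap, if recompiling isn't an immediate option, is a standard .NET assembly binding redirect in the integration's app.config, which tells .NET to load the newer eConnect assembly in place of the old reference.  This is only a sketch: it assumes the newer eConnect API surface is compatible with the calls the integration makes (test thoroughly), and the publicKeyToken shown is a placeholder--get the real token from the installed assembly, for example with the sn -T tool.

```xml
<!-- Sketch: app.config binding redirect; the token and versions shown are placeholders -->
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="Microsoft.Dynamics.GP.eConnect"
                          publicKeyToken="PLACEHOLDER" culture="neutral" />
        <!-- Redirect a GP 2013 (12.0.0.0) reference to the GP 2015 (14.0.0.0) assembly -->
        <bindingRedirect oldVersion="12.0.0.0" newVersion="14.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```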

    If you have a custom .NET integration, the potential bad news is that you, or your developer, or your GP partner, needs to have the .NET source code to update the integration.  Some customers encounter this error when they upgrade to a new version of GP, and realize that the developer who wrote the code left the company 3 years ago and they don't know where the source code might be.  Some customers change GP partners and didn't get a copy of the source code from their prior partner.

    If you can't get a copy of the source code, it is theoretically possible to decompile most .NET applications to get some or most of the source code, but in my limited experience as a novice user of such tools, decompilation just doesn't provide a full .NET project that can be easily updated and recompiled.  Or if it does, the code is often barely readable, and would be very difficult to maintain without a rewrite.






