Pester tests needed per comments in #33 #35
What I'd be inclined to do is modify the test scripts so that the environment-specific hard codes are variables at the top of the file(s), and then before the commit set them to generics like example.com or such.
Possible, but not very portable. When adding or modifying tests, that would complicate commits and merges, and would likely result in someone (probably me) inadvertently committing private settings. I was thinking of including default/template XML or JSON file(s) that needn't change unless the tests change. End users could copy the file(s) to $HOME and edit theirs independently. In the absence of said file(s), the Pester script could throw a terminating exception with guidance.
There's still a lot I'd like to improve in this, but https://github.com/elijahgagne/POSH-LTM-Rest.Tests/ is my first stab at it.
I was thinking the F5Session(s) could be serialized to the same JSON file as the rest of the configuration. Something along the lines of:

{
"Sessions": [
{
"Name": "N.N.N.N",
"BaseURL": "https://N.N.N.N/mgmt/tm/ltm/",
"Credential": {
"UserName": "vercellone",
"Password": "[encrypted securestring]"
}
},
{
"Name": "N.N.N.N",
"BaseURL": "https://N.N.N.N/mgmt/tm/ltm/",
"Credential": {
"UserName": "vercellone",
"Password": "[encrypted securestring]"
}
}
],
"Partition1": "partition1",
"etc..": "etc..."
}

Encrypted secure strings can only be decrypted by the same Windows user, so it still makes sense to me to have user-specific config files somewhere under the $HOME folder. But the location isn't critical. We could use an environment variable to allow overriding the default location. If you give me a couple of days, I'll generalize my tests and write a first draft of the JSON I believe necessary to support them. Even though I have 93 tests, my code coverage is still low (about 40%). So, there's plenty of work left to be done.
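A minimal sketch of how such a per-user config might be loaded, assuming a hypothetical `F5LTM_TEST_CONFIG` environment variable for the override and an illustrative file name (neither is part of the module):

```powershell
# Hypothetical loader for the per-user test configuration described above.
# The env var name and file name are illustrative assumptions.
$configPath = if ($env:F5LTM_TEST_CONFIG) {
    $env:F5LTM_TEST_CONFIG
} else {
    Join-Path $HOME 'F5-LTM.TestConfig.json'
}
if (-not (Test-Path $configPath)) {
    # Terminating exception with guidance, as proposed above
    throw "Test configuration not found at '$configPath'. Copy the template there and edit it for your environment."
}
$config = Get-Content $configPath -Raw | ConvertFrom-Json
foreach ($s in $config.Sessions) {
    # ConvertTo-SecureString can only decrypt a string encrypted by the same Windows user
    $secure = ConvertTo-SecureString $s.Credential.Password
    $cred   = New-Object System.Management.Automation.PSCredential ($s.Credential.UserName, $secure)
    # $cred would then be passed to New-F5Session for this entry
}
```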
I like the structure of your JSON config file and I like the idea of storing an encrypted password in the config file. We use a privileged account manager (PAM) to store and manage the AD credential that I use to connect to the LTMs. That password gets randomized multiple times a day. I think a solution that could work is an optional script that runs at the beginning of the Pester script (if this optional script exists). I could use this script to get my credential, encrypt it, and store it in the config file. Other people could use it to prompt for input and then make arbitrary changes to the config file based on that input. People who didn't need this flexibility could ignore this script.
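That optional hook could be as simple as dot-sourcing a script if it exists; the file name here is purely illustrative:

```powershell
# Hypothetical optional pre-test hook. If a user-specific setup script
# exists, dot-source it so it can fetch or rotate credentials and update
# the config file before the tests read it.
$preScript = Join-Path $HOME 'F5-LTM.TestSetup.ps1'
if (Test-Path $preScript) {
    . $preScript   # e.g. pull the current password from a PAM and re-encrypt it
}
```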
Edited my previous comments to change $PSHOME to $HOME to more accurately reflect my intent of storing the configuration somewhere under the user's home directory (i.e. c:\Users\username) by default. Sorry for any confusion that might have caused.
I intended to refine this further before sharing, but I just gave in and checked in a first draft to a new branch (pester). In it I've added 2 files:
F5-LTM.Tests.ps1 includes 2 helper functions:
Proposed discussion points:
Thanks for working on this. Here are some comments:
I agree with elijahgagne's comments. JSON seems a good format to me; it's my preferred format for config files these days. [BEGIN EDIT] If, on the other hand, the goal is to provide integration tests, then I'd be in favor of a JSON-formatted config file, with an obvious sample file in source, and the understanding that all users must create their own, custom config. [END EDIT]
Reasons:
- Flexibility
- Familiarity

Thoughts?
Either dot-sourcing a PS script or using a JSON file works for me. I agree that a PS script is easier and more flexible, although I would say that a JSON file feels cleaner. I don't have strong feelings either way and would be happy to do it whichever way you and everyone else prefer.
I'm also okay with either JSON or dot-sourcing PS. It feels like some good thought and time has already gone into this endeavor, and I'd hate to see it lose momentum.
Commit 566bfb3 is available as a preview. It is not PR-worthy yet, but IMHO it is a much cleaner solution than JSON. I'll follow up with a sample test case creation script soon.
Commit bbe943e
Questions, concerns, or suggestions?
Thanks for working on this. I just got back from vacation. I'll give it a try tomorrow or Tuesday.
Firstly, I think there is a lot of good work here and I've learned a few things reading through it. I still need to work through this more and I hope to be able to provide better feedback soon. But here are some initial thoughts/suggestions:
Thanks for the feedback. Looking forward to improving this greatly. Hopefully my comments below will prove helpful.
$Sessions = @{}
$Sessions.Add('Internal', (New-F5Session -LTMName '192.168.1.20' -LTMCredentials $credentials -Default -PassThrough))
$F5LTMTestCases.Get_HealthMonitor += @{ session = 'Internal' }
Yes, that was helpful. Thanks, I'm now using the environment variable to specify the test location. I am liking things more as I get to know it better. I can tell you put a lot of work into this.

I currently have 43 passing tests and 17 failing ones. I think most of the failing ones are because of assumptions that are not true in my environment. Some of those are going to be tricky to work through, and this is not the thread to try to do that in. One thing I have started doing to make things less repetitious for me is defining variables at the top that are then used multiple times below.

One question: for a couple of tests, I could only get them to succeed using a fullpath, but I'm not sure that should have been required. For example, with Get_VirtualServer_ByNameArray, you have dummy values of name = '[name1]','[name2]','[name3]'. Is the expectation that just the name, and not the fullpath, should be sufficient? I am happy with this approach and would be happy to help continue building it out.
Get-VirtualServer (like all Get-* commands that retrieve elements in partitions) uses Get-ItemPath, which defaults to the Common partition if neither a fullpath nor a partition is specified. If (like @joel74) all elements are in the Common partition, then it should work fine. In that respect it is working as designed.

However, if it offers greater clarity, we could have any combination of Get_VirtualServer_ByNameArray, Get_VirtualServer_ByFullPathArray, and/or Get_VirtualServer_ByNameArrayAndPartition. Or, simply extend Get_VirtualServer_ByNameArray to allow an optional Partition param.

That said, consider the following: hashtables in -TestCases collections can include extraneous key/value pairs that don't map to named test case params. The extraneous pairs are essentially ignored. That could be leveraged to provide some powerful shortcuts. For example:

$MyVirtualServers = @()
$MyVirtualServers += @{ session = 'Internal'; name = 'app1-test.dyn-intl.com'; partition = 'Development'; fullpath = '/Development/app1-test.dyn-intl.com' }
$MyVirtualServers += @{ session = 'Internal'; name = 'app2-test.dyn-intl.com'; partition = 'Development'; fullpath = '/Development/app2-test.dyn-intl.com' }
$F5LTMTestCases.Get_VirtualServer_ByNameAndPartition =
$F5LTMTestCases.Get_VirtualServer_ByPartition =
$F5LTMTestCases.Get_VirtualServer =
$F5LTMTestCases.Get_VirtualServer_ByFullpath = $MyVirtualServers
# Pipeline and Array test cases could also be derived from $MyVirtualServers

That could lead to a shorthand, where you set all the possible attributes of your virtual servers in one collection and the test permutations could be derived from it automatically. Just food for thought at this point.
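As a rough sketch of that derivation idea (the *_ByNameArray key shape is inferred from the earlier discussion, not taken from the actual test files):

```powershell
# Hypothetical derivation of an array-style test case from $MyVirtualServers.
# Collapses the per-server hashtables into one case carrying a name array.
$F5LTMTestCases.Get_VirtualServer_ByNameArray = @(
    @{
        session = 'Internal'
        name    = @($MyVirtualServers | ForEach-Object { $_.name })
    }
)
```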
What do you think of the following syntax? Test permutations based on the same elements could be automatically derived or declared statically.

F5Test 'Get-HealthMonitor' {
    F5Session 'Internal' -Permutations Dynamic {
        @{ name = 'http'; partition = 'Common' }
    }
    F5Session 'External' -Permutations Static {
        @{ partition = 'Common' }
        @{ name = 'http'; partition = 'Common' }
    }
}
I've begun experimenting with 'Unit' testing by mocking the New-F5Session and Invoke-RestMethodOverride functions. It looks very promising for maximizing code coverage without requiring F5 device connectivity. The trick to this strategy is the implementation of the mock Invoke-RestMethodOverride function. Does this sound like a worthy pursuit? Any other ideas?
In regards to the syntax, I'm having trouble understanding. I'm thinking that would replace the objects created in F5-LTM.TestCases.ps1, right? I do like that better than what's in F5-LTM.TestCases.ps1, as it seems clearer to me how I'd fill things in. I'm also trying to understand Dynamic vs Static permutations. Static seems straightforward. Should I think of Dynamic as a set of objects that can be decomposed, so that parts of the object (e.g. just the name instead of the name+partition) can be tested? If that's true, then I'm wondering how you would write the tests for that. Dynamic makes it easier to write the object to test, but harder to understand what tests will be run against the objects (if I've understood).

Unit tests using mock New-F5Session and Invoke-RestMethodOverride sounds very interesting. My first thought is: how are you going to implement mock versions of those that behave like the real thing? That said, I am really worried about testing things like deleting a virtual server. I don't currently have access to a non-prod LTM environment, so my testing goes against a live environment, which means I need to be really careful. I have heard that my organization has a license that allows me to spin up an LTM VMDK and create my own testing environment. It would be even nicer if I could get that in a Docker container and run different versions of the LTMs. Given how much work we're putting into creating this useful tool for managing BIG-IP devices, I wonder if we could get any help from F5 in having test environments for the purpose of testing this module.
That syntax would serve as a Domain Specific Language that gets translated automatically into $F5LTMTestCases at test run time. Your understanding of permutations is correct. Static would be one test case per hashtable declared. Dynamic would build as many permutations of the test cases as the properties supplied allow. The tests themselves wouldn't really change; there would just be a translation step that builds $F5LTMTestCases from the DSL. Your concern regarding understanding which tests will be run is valid. That could possibly be addressed either using ValidateSets or Pester Tags to opt in to certain permutations.

My first draft of the mock Invoke-RestMethodOverride implementation reads a JSON file from disk for a given Uri, Method, and LTMVersion. I hope it will be possible to simplify this in some cases. Negative result tests (i.e. 404 Not Found) should be easy. I haven't tried it yet, but I'm thinking I will enable some sort of tracing on my copy of Invoke-RestMethodOverride so that I can capture a large number of known good responses just by running my current integration tests. Unfortunately, commands that vary depending on the request body (i.e. New-, Set-) may require a more imaginative solution.

In the absence of a dedicated testing environment, risk could be mitigated by only performing New/Set/Remove functions on elements created by the test process itself. But the risk would not be eliminated entirely in the event a test misbehaves. A test environment for integration testing would be ideal. Perhaps @joel74 can pursue that further.
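A rough illustration of that replay strategy, assuming Pester v3-era Mock syntax; the parameter names and on-disk layout of captured responses are assumptions for illustration, not the actual draft:

```powershell
# Hypothetical Pester mock that replays captured JSON responses from disk,
# keyed by LTM version, request method, and URI. Paths are illustrative.
Mock Invoke-RestMethodOverride {
    $key  = '{0}_{1}_{2}' -f $LTMVersion, $Method, ($Uri -replace '[\\/:?]', '_')
    $file = Join-Path "$PSScriptRoot\Responses" "$key.json"
    if (-not (Test-Path $file)) {
        # Simulate a 404 for requests with no captured response
        throw 'Not Found'
    }
    Get-Content $file -Raw | ConvertFrom-Json
}
```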
It seems like a lot of the risk with integration tests might be mitigated by utilizing a dedicated Pester partition that's built up and torn down as part of the testing. It feels like the best way to perform the tests would be as integration tests on a real BIG-IP; otherwise the amount of effort in mocking up all the objects and methods needed to simulate the functionality would be overwhelming. Jason, do you have more details about your first draft of the mock Invoke-RestMethodOverride that you can share?
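The dedicated-partition idea might look roughly like this in Pester; note that New-F5Partition and Remove-F5Partition are purely hypothetical names here, not existing module functions:

```powershell
# Hypothetical integration-test scaffolding around a dedicated partition.
# New-F5Partition / Remove-F5Partition are illustrative, not module functions.
Describe 'Integration tests in a dedicated Pester partition' {
    BeforeAll { New-F5Partition -F5Session $Session -Name 'Pester' }
    AfterAll  { Remove-F5Partition -F5Session $Session -Name 'Pester' }

    It 'creates and removes a test pool without touching production partitions' {
        # New-/Set-/Remove- operations here act only on objects under /Pester/*
    }
}
```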
Demonstrates mocking for behavior verification testing of Get-HealthMonitor.
I do. Finally. My integration test efforts focus largely on state verification: verify that the properties of the objects returned match those expected for the given request parameters.

My more recent unit test efforts sought to mock the behavior of the F5 REST API, including dynamically simulating the output of Invoke-RestMethodOverride to support essentially the same state verification utilized for integration tests. Feasible in theory, but ultimately (as Joel suspected) overwhelming. It involves a ton of technical debt in terms of both raw data and conditional logic. And, although highly flexible, key permutations would quickly get buried.

e24bad7 aims to alleviate those shortcomings by instead focusing on behavior verification. It is much cleaner, and IMO will serve to better highlight edge case permutations such as those necessitated by variations between device versions. Looking forward to feedback.
Intended for inclusion in previous commit e24bad7.
This looks pretty amazing to me! I did not realize Pester had this capability. I like how this provides a safe way to run tests where I don't have to worry about a bad test messing up production.
Thanks, @elijahgagne. Would you be willing to contribute some similarly structured unit tests for other functions? I've considered using Assert-MockCalled -ParameterFilter to verify the requested URI. Any thoughts on that?
I think this looks a lot cleaner and more manageable than the previous iteration, but I'm still trying to understand the end game here. Since the F5 session and external function calls are being mocked, we're looking for unit test functionality: testing whether the logic internal to a given function acts as expected. Is that correct? In this health monitor example, what's being tested is the correctness of our parameter sets, whether the function accepts an object from the pipeline, and whether it can properly add the HealthMonitor object property. There doesn't appear to be any other logic to test in this instance.

This is definitely helpful for ensuring our functions work as expected, like when there is internal manipulation of JSON objects, but I'm not clear on how this helps with testing against different versions of a BIG-IP device. A large part of the functionality of this module involves communicating with a BIG-IP device, and if we're going to mock up all that communication, we have to know how all supported versions of a BIG-IP device would respond to any given call, which seems like a lot more work than just creating an integration-style test and making an actual call. What am I missing in regards to using this to test on different versions of the API?

My recent efforts have been around making the F5-LTM module a child module of a parent F5-BIGIP module, in an attempt to ensure that the module(s) we're developing here follow the architecture of the BIG-IP modules. I was able to move the F5-LTM module with only minor modifications into this new project, though I have some work yet to do to get ArgumentCompletion.ps1 functioning in this new setup. I'm going to continue those efforts, and as more Pester tests become available, I plan to keep the new F5-BIGIP project in sync with the F5-LTM project.

Cheers,
Yes, I can spend some time this weekend on unit tests for other functions. Unfortunately I don't have a lot of free time right now, so I can't promise much. Re: Assert-MockCalled -ParameterFilter, I'll need to get a better understanding of things first, so I'm sure I'll have some thoughts after this weekend.
@joel74 Correct: these are unit tests. Get-HealthMonitor is the Subject Under Test (SUT). There is no version-specific behavior for it. I've added tests for New-F5Session that should help clarify my vision for edge case permutations (version-specific in this case).

@elijahgagne I've added a single -ParameterFilter on one of the Get-HealthMonitor tests to demonstrate that as well. (Note that $Uri -match '[regex]' might be more suitable for some tests. And $Body -match '[regex]' will likely be useful for New/Set operations.) c4eb502

Code coverage is currently 92.59% for New-F5Session and 83.33% for Get-HealthMonitor. Keep the feedback coming.
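A sketch of the -ParameterFilter idea in Pester v3-style syntax; the 'monitor' URI fragment and the test name are illustrative guesses, not taken from the committed tests:

```powershell
# Hypothetical behavior-verification test: assert that the mocked REST
# call was made exactly once with the expected URI. Pattern is illustrative.
It 'requests the monitor collection URI' {
    Get-HealthMonitor -F5Session $Session | Out-Null
    Assert-MockCalled Invoke-RestMethodOverride -Times 1 -Exactly `
        -ParameterFilter { $Uri -match 'monitor' }
}
```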
I got set up for this and read up on mocking, but didn't get much further than that so far. I'll work on it some this week. I'm planning to start with Get-iRule.
Foundational Pester unit tests for #35
Reference: PR #33
I currently have 93 Pester tests in my working folder, but I haven't yet worked out how best to share them. Right now I have a lot of stuff specific to my environment that is more or less hard-coded. To make it dynamic for others, it either needs to read test input from a file outside the repository ($HOME perhaps), or it needs to create a bunch of arbitrary items temporarily. The latter would require some New-* or Set-* functions that don't yet exist.
Anyone have suggestions or willing to help?