Closed Bug 745002 Opened 13 years ago Closed 13 years ago

Build Proof of Concept Unit Tests for Apps In the Cloud Infrastructure

Categories

(Web Apps Graveyard :: AppsInTheCloud, defect)

x86
macOS
defect
Not set
normal

Tracking

(Not tracked)

RESOLVED DUPLICATE of bug 750948

People

(Reporter: onecyrenus, Unassigned)

Details

Unit tests are required for the Apps in the Cloud infrastructure.
David - Could you provide more context for this bug? Is this meant to lay the foundation for the unit testing framework for AITC? Specific unit tests, on the other hand, should be applied on a per-feature/bug basis (that's my understanding at least, according to the test framework components).
I believe this is a task tracking bug (most likely). If there are unit tests that come out of the effort, they would show up on anant's github account with the rest of the code. Could you provide an example of what you mean?
-- A bug with the setup you are describing above?
-- What feature/component breakdown would you think appropriate?
The test infrastructure (for unit tests, at least) is already in place. There is only one AITC client bug open right now. In my opinion, it is unnecessary overhead to create multiple bugs for each feature for an initial code drop such as the one in bug 744257 (which will be quite large). Let's check in as many tests as we can initially and follow up with bugs after we've managed to land core functionality.
(In reply to dclarke@mozilla.com from comment #2)
> I believe this is a task tracking bug (most likely).
>
> If there are unit tests that come out of the effort, they would show up on
> anant's github account with the rest of the code.
>
> Could you provide an example of what you mean?
> -- A bug with the setup you are describing above?
> -- What feature/component breakdown would you think appropriate?

Sounds good. This sounds similar to https://bugzilla.mozilla.org/show_bug.cgi?id=733631. Maybe the right way to word this is to ensure that the infrastructure is in place with a proof-of-concept unit test (something like Mohamed did in bug 690493). The goal here is to prove that unit testing is possible in the current infrastructure in the short term. Once past that hurdle, evolve on a per-feature basis using tracker bugs, so that implementing a feature carries the expectation of implementing its unit tests as one whole.

My suggestion for this bug as a starting point: "Build proof of concept unit test for AITC implementation with required infrastructure in place". Defining something like that in the short term gives direction for the immediate goal on the critical path. After the first proof of concept is built to close this bug, move forward with more unit tests on a per-feature/bug basis. An example of this is bug 716127 from the geolocation team: after the bug was filed, they implemented the fix together with the unit test that went with it.

(In reply to Anant Narayanan [:anant] from comment #3)
> The test infrastructure (for unit tests, at least) is already in place.
> There is only one AITC client bug open right now.

Right. Per a discussion on IRC, I'm opening sub-bugs for sub-tasks to give more granularity on what's required to meet the requirements of the tracking bug.

> In my opinion, it is unnecessary overhead to create multiple bugs for each
> feature for an initial code drop such as the one in bug 744257 (which will
> be quite large). Let's check in as many tests as we can initially and
> follow up with bugs after we've managed to land core functionality.

If the feature is large, though, wouldn't that be more reason to break down the task? There's ambiguity and churn right now around requirements we haven't thought about. The sooner we identify them up front in the form of a work breakdown structure, the clearer the picture of the initial implementation becomes. The concern I have is that we need to keep an eye out to ensure that the AITC implementation does not break existing functionality, consistently maps to existing implementations, etc. In terms of this bug, the underlying concern is just that a task needs a "start" and an "end". "Check in many tests", for example - how many is acceptable? What's the goal that completes this task? Generally, I'm just trying to keep in mind where the start and end are.
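For concreteness, the proof-of-concept test suggested above could be as small as the following minimal sketch, assuming the existing XPCShell harness. The resource URI and the AitcClient symbol are hypothetical placeholders, not the real module layout; substitute whatever the client code from bug 744257 actually exports.

"use strict";

const { utils: Cu } = Components;

function run_test() {
  run_next_test();
}

// Hypothetical proof of concept: just prove the module loads and the
// client can be constructed inside the existing XPCShell harness.
add_test(function test_client_instantiation() {
  // The resource URI below is an assumption about where the AITC client
  // module might be registered; adjust to match the real build config.
  let scope = {};
  Cu.import("resource://services-aitc/client.js", scope);

  let client = new scope.AitcClient();
  do_check_neq(client, null);

  run_next_test();
});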
(In reply to Jason Smith from comment #4)
> > In my opinion, it is unnecessary overhead to create multiple bugs for each
> > feature for an initial code drop such as the one in bug 744257 (which will
> > be quite large). Let's check in as many tests as we can initially and
> > follow up with bugs after we've managed to land core functionality.
>
> If the feature is large, though, wouldn't that be more reason to break down
> the task? There's ambiguity and churn right now around requirements we
> haven't thought about. The sooner we identify them up front in the form of
> a work breakdown structure, the clearer the picture of the initial
> implementation becomes. The concern I have is that we need to keep an eye
> out to ensure that the AITC implementation does not break existing
> functionality, consistently maps to existing implementations, etc. In terms
> of this bug, the underlying concern is just that a task needs a "start" and
> an "end". "Check in many tests", for example - how many is acceptable?
> What's the goal that completes this task? Generally, I'm just trying to
> keep in mind where the start and end are.

I just had a discussion with David, and the conclusion was that the AITC functionality is of such a nature that it is close to impossible to write any meaningful tests for the client in the short term. The core functionality of AITC depends on talking to external servers (BrowserID, the token server, and the AITC server itself), but none of the XPCShell tests in Firefox communicate with external servers -- and for good reason: you don't want the tree to be red just because a server went down.

In light of the above, I recommend that we drop the idea of trying to land automated tests before the next tree window (April 24) and instead focus on manually testing functionality. I will draft a detailed document that outlines the steps to manually verify the functionality we have promised to deliver with the first version of the AITC client for Firefox Desktop. We will turn our attention back to automated tests for the next window, which gives us six extra weeks to do the job right.

As far as breaking existing functionality goes, we will rely on try pushes to ensure that the AITC client patch in bug 744257 does not cause any regressions. I will note that WebRT functionality for Firefox Desktop is landing in bug 725408 with a similar process; even though that functionality is arguably much more complex than AITC, it is being done without splitting it into several different bugs.
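As an aside, the usual way XPCShell tests avoid the external-server problem described above is to stand up a mock server in-process via the in-tree httpd.js helper and point the client at localhost. A minimal sketch follows, assuming a hypothetical /1.0/apps endpoint and an empty JSON payload; neither is the real AITC protocol, and a real test would drive the AITC client rather than a raw XHR.

"use strict";

const { classes: Cc, interfaces: Ci, utils: Cu } = Components;
// httpd.js provides HttpServer for in-process mock HTTP servers.
Cu.import("resource://testing-common/httpd.js");

function run_test() {
  run_next_test();
}

add_test(function test_against_mock_aitc_server() {
  let server = new HttpServer();
  // Hypothetical endpoint and payload, standing in for the AITC server.
  server.registerPathHandler("/1.0/apps", function (request, response) {
    response.setStatusLine(request.httpVersion, 200, "OK");
    response.setHeader("Content-Type", "application/json", false);
    response.write(JSON.stringify({ apps: [] }));
  });
  server.start(-1); // -1 = pick any free port
  let url = "http://localhost:" + server.identity.primaryPort + "/1.0/apps";

  // A real test would point the AITC client at `url`; fetching by hand
  // here just shows that the tree never needs a live external server.
  let xhr = Cc["@mozilla.org/xmlextras/xmlhttprequest;1"]
              .createInstance(Ci.nsIXMLHttpRequest);
  xhr.open("GET", url, true);
  xhr.onload = function () {
    do_check_eq(xhr.status, 200);
    do_check_eq(JSON.parse(xhr.responseText).apps.length, 0);
    server.stop(run_next_test);
  };
  xhr.send(null);
});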
(In reply to Anant Narayanan [:anant] from comment #5)
> I will note that WebRT functionality for Firefox Desktop is landing in bug
> 725408 with a similar process; even though that functionality is arguably
> much more complex than AITC, it is being done without splitting it into
> several different bugs.

To clarify, I mean that there is no need to split the bug for the client code itself, but I certainly have nothing against creating different bugs for the tests associated with it (for which this bug is the tracker).
(In reply to Anant Narayanan [:anant] from comment #5)
> I just had a discussion with David, and the conclusion was that the AITC
> functionality is of such a nature that it is close to impossible to write
> any meaningful tests for the client in the short term. The core
> functionality of AITC depends on talking to external servers (BrowserID,
> the token server, and the AITC server itself), but none of the XPCShell
> tests in Firefox communicate with external servers -- and for good reason:
> you don't want the tree to be red just because a server went down.
>
> In light of the above, I recommend that we drop the idea of trying to land
> automated tests before the next tree window (April 24) and instead focus on
> manually testing functionality. I will draft a detailed document that
> outlines the steps to manually verify the functionality we have promised to
> deliver with the first version of the AITC client for Firefox Desktop. We
> will turn our attention back to automated tests for the next window, which
> gives us six extra weeks to do the job right.

Makes sense. We're tight on time as it is (April 24th is getting closer). Note that Aaron Train (Mobile) and I (Desktop) can assist with a manual test plan, as long as I have an understanding of what's getting implemented. Assistance will be greatly appreciated, given that the timeframe for testing is minimal. I do question whether the minimized test timeframe is acceptable, but I'll keep that discussion to a separate thread I'll be sending out via email soon.

> As far as breaking existing functionality goes, we will rely on try pushes
> to ensure that the AITC client patch in bug 744257 does not cause any
> regressions.

Sounds good.

> I will note that WebRT functionality for Firefox Desktop is landing in bug
> 725408 with a similar process; even though that functionality is arguably
> much more complex than AITC, it is being done without splitting it into
> several different bugs.

True, although we've noticed over time that the feature was far larger than we initially expected, given that we didn't break it down significantly. Task breakdown can help give a better picture of the scope of an implementation. I do think that in that implementation we've had a problem understanding the scope, especially because I still don't have a build to test, and it's April 12th :(.
Test cases at the unit level, or the functional level, may not be directly correlated at the feature level. Given that the timeframe is so short, I would prefer to track this at a higher level, and then report up what is covered, and/or continuously track progress. Doing any sort of breakdown by feature / test cases may be premature, as the test framework still needs to be investigated and solutions examined. Also, I think we have to figure out where we want to apply the correct level of granularity, based upon the number of people, time commitments, and the task at hand.
(In reply to dclarke@mozilla.com from comment #8)
> Test cases at the unit level, or the functional level, may not be directly
> correlated at the feature level.

I don't agree. When features/bugs are implemented, the implementation is correlated with underlying unit/functional tests. Please see the V-model for more information (http://en.wikipedia.org/wiki/V-Model_%28software_development%29).

> Given that the timeframe is so short, I would prefer to track this at a
> higher level, and then report up what is covered, and/or continuously track
> progress.

The problem with this bug right now is that it has no "end". A bug needs a start and an end point, not the ability to go on forever. What determines that this task is finished? Tasks with no end are a bad idea, as they create confusion about what needs to be done, the scope of the work, and so on.

> Doing any sort of breakdown by feature / test cases may be premature, as
> the test framework still needs to be investigated and solutions examined.

True, for right now this is the case. The goal may be to think about the immediate "first step" that needs to be done. In this case, I would say laying the foundation for a proof of concept would be the immediate goal <-- that would be the proper task to track. Later down the line, evolve into new bugs as needed.

> Also, I think we have to figure out where we want to apply the correct
> level of granularity, based upon the number of people, time commitments,
> and the task at hand.

True, we need to agree on this. There does need to be an "end" to a bug, though; a bug without an end cannot be finished. I do think this bug should be resolved as incomplete if we have no clear end for it.
The implementation is correlated with the underlying unit/functional tests, but there isn't necessarily a one-to-one mapping such that X tests would correlate to testing one feature explicitly. A specific test can explicitly test a feature, but implicit in that test is the testing of other, more basic mechanisms. Trying to achieve a one-test mapping for each feature will lead to test automation bloat.

The bug is a task tracker, and the task is to write automation that tests Apps in the Cloud. As far as I can tell, that is all the clarity I feel comfortable with at this point. The bug can always be renamed as more data becomes available.

The immediate first step would be investigation, but that doesn't have an immediate end goal. We could have a bug for "laying the foundation", but I'd imagine that would happen after the investigation, and we'd hang bugs off of this one as we gain more knowledge.
(In reply to dclarke@mozilla.com from comment #10)
> The bug is a task tracker, and the task is to write automation that tests
> Apps in the Cloud. As far as I can tell, that is all the clarity I feel
> comfortable with at this point. The bug can always be renamed as more data
> becomes available.
>
> The immediate first step would be investigation, but that doesn't have an
> immediate end goal. We could have a bug for "laying the foundation", but
> I'd imagine that would happen after the investigation, and we'd hang bugs
> off of this one as we gain more knowledge.

Here's my suggestion: an investigation leading into laying the foundation might be a better starting point. That falls in line with the approach Felipe used (I liked his approach). In other words, I'd close this bug as "incomplete" and open a new bug with the following title:

"Build proof of concept unit test for apps in the cloud client API"

Include the investigation there. That gives a good starting goal. When we come up with more goals after that, let's add them on.
Summary: Unit tests for Apps In the Cloud Infrastructure → Build Proof of Concept Unit Test for Apps In the Cloud Infrastructure
Summary: Build Proof of Concept Unit Test for Apps In the Cloud Infrastructure → Build Proof of Concept Unit Tests for Apps In the Cloud Infrastructure
For tracking purposes, let's continue in bug 750948, as it gives a clear definition of what needs to be done and targets the same goal.
Status: NEW → RESOLVED
Closed: 13 years ago
Resolution: --- → DUPLICATE
Product: Web Apps → Web Apps Graveyard